Modelling Project Success by Alfi, Jiaying, Himanshu, Sara


Drawing

Problem Solving Strategy

The ideal machine learning project involves general flow analysis stages for building a Predictive Model. Steps followed to perform data analysis:

  1. Understanding the problem domain
  2. Data Exploration and Preparation
  3. Feature Engineering
  4. Dimensionality Reduction (or Feature Selection)
  5. Various Model Evaluation and
  6. Hyper-parameter Tuning
  7. Ensembling: Model Selection

STEP 1. Understanding the problem domain

  • Kickstarter - Maintains a global crowdfunding platform focused on creativity (films, music, stage shows, comics, journalism, video games, technology and food-related projects)
  • People who back Kickstarter projects are offered tangible rewards or experiences in exchange for their pledges.

Question: why not a model to predict if a project will be successful, failed or cancelled based on given dataset?
List of possible predicting factors:

  • Total amount to be raised
  • Total duration of the project Campaign
  • Theme of the project
  • Writing style of the project description
  • Length of the project description
  • Project launch time
  • Obviously backers and pledged amount
In [1]:
%reset -f
#Load Pre-requisits
import sys
import os
import math
import pickle
import matplotlib
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import gc

import warnings
warnings.filterwarnings('ignore')
np.set_printoptions(threshold=sys.maxsize)

## Visualization libraries

import plotly.tools as tls
import plotly.offline as py
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
init_notebook_mode(connected=True)
import plotly.graph_objs as go
from collections import Counter

##Text Processing

import nltk
nltk.download('stopwords')
nltk.download('punkt')
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize 
from textblob import TextBlob
import string
import re
stop_words = set(stopwords.words('english')) 
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer

#Feature Selection/Elimination
import statsmodels.formula.api as sm 
from sklearn.feature_selection import RFE 
from sklearn.linear_model import LassoCV

#Bagging and Boosting Algorithms, Evaluation Metric
!pip install imblearn
!pip install scipy
from imblearn.over_sampling import ADASYN
from sklearn.preprocessing import LabelEncoder 
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix

#Algos
from sklearn.ensemble import BaggingClassifier, ExtraTreesClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import GradientBoostingClassifier ##SKLearn GBM - slower
import xgboost as xgb
from sklearn.ensemble import AdaBoostClassifier
import lightgbm as lgb
from sklearn.ensemble import VotingClassifier

##DR Tools
from sklearn.decomposition import PCA, TruncatedSVD, KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA 
from sklearn.pipeline import make_pipeline 
from sklearn.model_selection import cross_val_score
##Hyper

from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import GridSearchCV
from pprint import pprint
[nltk_data] Downloading package stopwords to
[nltk_data]     C:\Users\hmnsh\AppData\Roaming\nltk_data...
[nltk_data]   Package stopwords is already up-to-date!
[nltk_data] Downloading package punkt to
[nltk_data]     C:\Users\hmnsh\AppData\Roaming\nltk_data...
[nltk_data]   Package punkt is already up-to-date!
Requirement already satisfied: imblearn in c:\programdata\anaconda3\lib\site-packages (0.0)
Requirement already satisfied: imbalanced-learn in c:\programdata\anaconda3\lib\site-packages (from imblearn) (0.4.3)
Requirement already satisfied: scipy>=0.13.3 in c:\programdata\anaconda3\lib\site-packages (from imbalanced-learn->imblearn) (1.1.0)
Requirement already satisfied: scikit-learn>=0.20 in c:\programdata\anaconda3\lib\site-packages (from imbalanced-learn->imblearn) (0.20.3)
Requirement already satisfied: numpy>=1.8.2 in c:\programdata\anaconda3\lib\site-packages (from imbalanced-learn->imblearn) (1.16.2)
Requirement already satisfied: scipy in c:\programdata\anaconda3\lib\site-packages (1.1.0)

STEP 2. Data Exploration and Preparation

  • Verified Individual Column values
  • Class Variable Distribution - (Selected canceled, failed, successful)
     - failed        52.22
     - successful    35.38
     - canceled      10.24
     - undefined      0.94
     - live           0.74
     - suspended      0.49

Cancelled State There are 10% of projects in this dataset are in cancelled state. Since there is no clear reason given in this dataset for Project to get cancelled or no date on which it got cancelled. here, Canceled state should be considered as separate state and not failed.

For Example, It Could be project owner getting funding from somewhere else or the project requirements changed which let him recreate online crowd funding campaign.

In [6]:
print ("Total Projects: ", df_ks.shape[0], "\nTotal Features: ", df_ks.shape[1])
df_ks.head()
Total Projects:  378661 
Total Features:  15
Out[6]:
ID name category main_category currency deadline goal launched pledged state backers country usd pledged usd_pledged_real usd_goal_real
0 1000002330 The Songs of Adelaide & Abullah Poetry Publishing GBP 2015-10-09 1000.0 2015-08-11 12:12:28 0.0 failed 0 GB 0.0 0.0 1533.95
1 1000003930 Greeting From Earth: ZGAC Arts Capsule For ET Narrative Film Film & Video USD 2017-11-01 30000.0 2017-09-02 04:43:57 2421.0 failed 15 US 100.0 2421.0 30000.00
2 1000004038 Where is Hank? Narrative Film Film & Video USD 2013-02-26 45000.0 2013-01-12 00:20:50 220.0 failed 3 US 220.0 220.0 45000.00
3 1000007540 ToshiCapital Rekordz Needs Help to Complete Album Music Music USD 2012-04-16 5000.0 2012-03-17 03:24:11 1.0 failed 1 US 1.0 1.0 5000.00
4 1000011046 Community Film Project: The Art of Neighborhoo... Film & Video Film & Video USD 2015-08-29 19500.0 2015-07-04 08:35:03 1283.0 canceled 14 US 1283.0 1283.0 19500.00

Data Cleaning and Noise Removal

  1. Get rid of unwanted columns (ID, goal, pledged, usd_pledged and currency )
  2. Remove Duplicates if exist
  3. Handle Missing Values, in this case Delete those rows
  4. Get rid of noise above 2200000 goal amount (all failed)
  5. Project launched during 1970 and 2018 (6 rows) can be removed
  6. Misrepresented data such as "N,0"" in country column must be addressed, it will be cleaned as a part of data cleaning.

Note: Name column has 4 Nan whereas usd pledged is got 3797 NaN values. This rows can be directly removed as dataset is big enough to perfrom analysis.

  • Before Cleaning: (378661, 15)
  • After Cleaning: (369678, 10)
In [47]:
def data_clean(df_ks):
    df_ks = df_ks.dropna() ## Drop the rows where at least one element is missing.
    df_ks = df_ks[df_ks["state"].isin(["failed", "successful", 'canceled'])] ## State - Successful and Failed
    df_ks = df_ks.drop(["ID", "currency", "pledged", "usd pledged", "goal"], axis = 1) ##Drop not useful columns
    df_ks = df_ks[df_ks['usd_goal_real']< 2200000] # Remove noise from the data
    return df_ks

print("Before Cleaning:", df_ks.shape)
df_clean = data_clean(df_ks)
print("After Cleaning:", df_clean.shape)
del df_ks ##  releasing system memory
gc.collect()
Before Cleaning: (378661, 15)
After Cleaning: (369678, 10)
Out[47]:
542
In [38]:
df_clean['Goal(USD Millions)'] = (df_clean['usd_goal_real'].astype(float)/1000000).astype(float)
df_clean['Pledged(USD Millions)'] = (df_clean['usd_pledged_real'].astype(float)/1000000).astype(float)

plt.figure(figsize=(12,6))
plt.suptitle('(Exploration) Goal vs Pledged Amount', fontsize=24)
#plt.annotate('After approximate 2000000 goal, none of them were successfull(Noise)', xy=(650000, 960000), xytext=(600000, 840000),arrowprops=dict(facecolor='black', shrink=0.05))
sns.set_style('whitegrid')
sns.set(font_scale=1.4)
ax = sns.scatterplot(x="Goal(USD Millions)", y="Pledged(USD Millions)", s=130, hue='state' , data=df_clean)

plt.show()
df_clean = df_clean.drop(["Goal(USD Millions)", "Pledged(USD Millions)"], axis = 1)

Distributions - Outliers and Skew

Numeric variables such as backers, usd_pledged_real, usd_goal_real are higly right skewed because of so many failed instances not having single backers or pledged amount raised. This will be addressed through data normalization while developing a model.

To explore these data it needs to be transformed and then histogram should be created to visualize distributions.

| skew | goal_real - 12.765938 | Pledged_real - 82.063085 | backers - 86.294188 |

Column usd_goal_real_log usd_pledged_real_log
count 369678.000000 369678.000000
mean 8.632460 5.775453
std 1.671539 3.309677
min 0.009950 0.000000
25% 7.601402 3.526361
50% 8.612685 6.456770
75% 9.662097 8.314587
max 14.591996 16.828050

Minimum goal amount is as small as 0.01

In [51]:
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
init_notebook_mode(connected=True)

#General Stats
df_clean["usd_goal_real_log"] = np.log(df_clean.usd_goal_real+1)
df_clean["usd_pledged_real_log"] = np.log(df_clean.usd_pledged_real+1)
#df_clean["backers_log"] = np.log(df_clean.backers+1)
# drop by Name
df1 = df_clean.drop(['usd_goal_real', 'usd_pledged_real', 'backers'], axis=1)
#print (df1.describe())
del df1
df_clean.drop(['usd_goal_real_log', 'usd_pledged_real_log'], axis=1, inplace = True)
gc.collect()

#print("Minimum goal amount is as small as 0.01")

#configure_plotly_browser_state()
df_cancel = df_clean[df_clean["state"] == "canceled"]
df_failed = df_clean[df_clean["state"] == "failed"]
df_sucess = df_clean[df_clean["state"] == "successful"]


#First plot
trace0 = go.Histogram(
    x= np.log(df_clean.usd_goal_real+1),
    histnorm='probability', showlegend=False,
    xbins=dict(
        start=-5.0,
        end=19.0,
        size=1),
    autobiny=True)

#Second plot
trace1 = go.Histogram(
    x = np.log(df_clean.usd_pledged_real+1),
    histnorm='probability', showlegend=False,
    xbins=dict(
        start=-1.0,
        end=17.0,
        size=1))

# Add histogram data
x1 = np.log(df_failed['usd_goal_real']+1)
x2 = np.log(df_sucess["usd_goal_real"]+1)
x3 = np.log(df_cancel["usd_goal_real"]+1)

trace3 = go.Histogram(
    x=x1,
    opacity=0.60, nbinsx=30, name='Goals Failed', histnorm='probability'
)
trace4 = go.Histogram(
    x=x2,
    opacity=0.60, nbinsx=30, name='Goals Sucessful', histnorm='probability'
)
trace5 = go.Histogram(
    x=x3,
    opacity=0.60, nbinsx=30, name='Goals Cancelled', histnorm='probability'
)


data = [trace0, trace1, trace3, trace4, trace5]
layout = go.Layout(barmode='overlay')

#Creating the grid
fig = tls.make_subplots(rows=2, cols=2, specs=[ [{'colspan': 2}, None], [{}, {}]],
                          subplot_titles=('Failed, Cancelled and Sucessful Projects',
                                          'Goal','Pledged'))

#setting the figs
fig.append_trace(trace0, 2, 1)
fig.append_trace(trace1, 2, 2)
fig.append_trace(trace3, 1, 1)
fig.append_trace(trace4, 1, 1)
fig.append_trace(trace5, 1, 1)

fig['layout'].update(title="(Data Exploration) Log Transformed Distribuitions",
                     height=500, width=900, barmode='overlay')
iplot(fig)
This is the format of your plot grid:
[ (1,1) x1,y1           -      ]
[ (2,1) x2,y2 ]  [ (2,2) x3,y3 ]

Distributions of Monetory Columns against Class Variable - State

Dataset Amount values are highly right skewed and to view distributions it must be log transformed.

Logarithm: Log of a variable is a common transformation method used to change the shape of distribution of the variable on a distribution plot. It is generally used for reducing right skewness of variables. Though, It can’t be applied to zero or negative values as well.

Distribution shows:

  • Successful Projects had relatively small fundraising goals compare to failed or cancelled Projects.
  • Cancelled and Failed Project goal amount is high after median.
  • 16 % of pledged amount is around 1 USD.

STEP 3. Feature Engineering

  1. Time Data: Launched Year, Launched Month, Launch Day, is_weekend, duration

  2. Categorical Data: Create Dummies for Main Category and Country. Categorical Levels: main_category(15) and category(159) are different level of categories.

  3. Backers - Number of people supporting the project.

  4. Numerical Data: Generate Number of Projects and Mean Goal Amount for each Main category and Sub category, Difference in mean main_category and mean sub category to goal amount. Goal - Total fund needed to execute the project and pledged amount is amount raised so far. usd_pledged_real and usd_pledged goal is USD conversion from different currencies using online conversion API.

  1. Text Features: Text Information: name is project name and different text features can be extracted using feature extraction techniques.

Identify values from Project name column. Extract Length, Percentage of Punctuations, Syllable Count, Character Count, Number of Words, Stopwords Count, Capitalized word counts, Number of numeric values and then Clean the data for plotting word cloud

Time: Launched and deadline can be used to identify and extract time related features.

  • Clean Data Shape: (369670, 11)
  • Added Text Features Shape: (369670, 20)
  • Added Numerical Features Shape: (369670, 66)
In [43]:
def syllable_count(word):
    word = word.lower()
    vowels = "aeiouy"
    count = 0
    if word[0] in vowels:
        count += 1
    for index in range(1, len(word)):
        if word[index] in vowels and word[index - 1] not in vowels:
            count += 1
    if word.endswith("e"):
        count -= 1
    if count == 0:
        count += 1
    return count

def count_punct(text):
    count = sum([1 for char in text if char in string.punctuation])
    return round(count/(len(text) - text.count(" ")), 3)*100

def avg_word(sentence):
  words = sentence.split()
  return (sum(len(word) for word in words)/len(words))

def _clean(txt): #test['name'] = df_ks['name'].apply(_clean)
    '''Make text lowercase, remove text in square brackets, remove punctuation and remove words containing numbers.'''
    txt = txt.lower()
    # punctuation removal 
    txt = ''.join(x for x in txt if x not in string.punctuation)
    txt = re.sub('[%s]' % re.escape(string.punctuation), ' ', txt)
    txt = re.sub('[‘’“”…]', ' ', txt)
    txt = re.sub('\n', ' ', txt)
    txt = re.sub('\w*\d\w*', ' ', txt)

    # stopwords removal  
    word_tokens = word_tokenize(txt)    
    #text_list = [w for w in word_tokens if not w in stop_words]  
    clean_txt = ""
  
    for w in word_tokens:
        if w.lower() not in stop_words:
            clean_txt += " "
            clean_txt += w 
    
    clean_txt = ' '.join(clean_txt.split()) # Removing multiple whitespaces
    noise = ['canceled']
    for ns in noise:
        clean_txt = clean_txt.replace(ns, "")

    return clean_txt

## feature engineering

def features1(projects):
    projects["launched_year"]   = projects["launched"].dt.year
    projects["launched_month"]   = projects["launched"].dt.month
    projects["launched_week"]    = projects["launched"].dt.week
    projects["launched_day"]     = projects["launched"].dt.weekday
    projects["is_weekend"]       = projects["launched_day"].apply(lambda x: 1 if x > 4 else 0)
    #projects["state"]            = projects["state"].apply(lambda x: 1 if x=="successful" else 0)
    projects["duration"]         = projects["deadline"] - projects["launched"]
    projects["duration"]         = projects["duration"].apply(lambda x: int(str(x).split()[0]))
    projects = pd.get_dummies(projects['country']).join(projects)
    projects = pd.get_dummies(projects['main_category']).join(projects)  
    ## label encoding the categorical features
    #projects = pd.concat([projects, pd.get_dummies(projects["main_category"])], axis = 1)
    le = LabelEncoder()
    for c in ["category", "main_category"]:
        projects[c] = le.fit_transform(projects[c])

    ## Generate Count Features related to Category and Main Category
    t2 = projects.groupby("main_category").agg({"usd_goal_real" : "mean", "category" : "sum"}) # Mean and count
    t1 = projects.groupby("category").agg({"usd_goal_real" : "mean", "main_category" : "sum"})
    t2 = t2.reset_index().rename(columns={"usd_goal_real" : "mean_main_category_goal", "category" : "main_category_count"})
    t1 = t1.reset_index().rename(columns={"usd_goal_real" : "mean_category_goal", "main_category" : "category_count"})
    projects = projects.merge(t1, on = "category")
    projects = projects.merge(t2, on = "main_category")
    projects["diff_mean_category_goal"] = projects["mean_category_goal"] - projects["usd_goal_real"]
    projects["diff_mean_category_goal"] = projects["mean_main_category_goal"] - projects["usd_goal_real"]
    projects["diff_pledged_goal_real"] = projects["usd_pledged_real"] - projects["usd_goal_real"]
    projects = projects.drop(["launched", "deadline"], axis = 1)
    return projects

def text_feat(df):
    # Function to calculate length of message excluding space
    df['name_len'] = df['name'].apply(lambda x: len(x) - x.count(" "))
    df['punct%'] = df['name'].apply(lambda x: count_punct(x))
    df["syllable_count"]   = df["name"].apply(lambda x: syllable_count(x))
    df["num_words"]  = df["name"].apply(lambda x: len(x.split()))
    df["num_chars"]  = df["name"].apply(lambda x: len(x.replace(" ","")))
    df['avg_word'] = df['name'].apply(lambda x: avg_word(x))
    df['num_stopwords'] = df['name'].apply(lambda x: len([x for x in x.split() if x in stop_words]))
    df['num_numerics'] = df['name'].apply(lambda x: len([x for x in x.split() if x.isdigit()]))
    df['num_capitalized'] = df['name'].apply(lambda x: len([x for x in x.split() if x.isupper()]))
    df['name'] = df['name'].apply(_clean)
    
    return df


print("Clean Data Shape:", df_clean.shape)
df_text_feat = text_feat(df_clean)
#df_text_feat_tfidf = name_tfidf(df_text_feat)
print("Added Text Features Shape:",df_text_feat.shape)
df_feat = features1(df_text_feat)
print("Added Numerical Features Shape:", df_feat.shape)
#df_feat = category_tfidf(df_text_feat)
#print("Added Category TF-IDF Shape:", df_feat.shape)
Clean Data Shape: (369670, 11)
Added Text Features Shape: (369670, 20)
Added Numerical Features Shape: (369670, 66)
In [47]:
from wordcloud import WordCloud, STOPWORDS

# Thanks : https://www.kaggle.com/aashita/word-clouds-of-various-shapes ##
def plot_wordcloud(text, mask=None, max_words=200, max_font_size=100, title = None, title_size=40, image_color=False):
    stopwords = set(STOPWORDS)
    more_stopwords = {'school', 'miami', 'canceled'}
    stopwords = stopwords.union(more_stopwords)

    wordcloud = WordCloud(background_color='black',
                    stopwords = stopwords,
                    max_words = max_words,
                    max_font_size = max_font_size, 
                    random_state = 42,
                    width=800, 
                    height=400,
                    mask = mask)
    wordcloud.generate(str(text))
    
    #plt.figure(figsize=figure_size)
    if image_color:
        image_colors = ImageColorGenerator(mask);
        plt.imshow(wordcloud.recolor(color_func=image_colors), interpolation="bilinear");
        plt.title(title, fontdict={'size': title_size,  
                                  'verticalalignment': 'bottom'})
    else:
        plt.imshow(wordcloud);
        plt.title(title, fontdict={'size': title_size, 'color': 'black', 'verticalalignment': 'bottom'})
    plt.axis('off');
    plt.tight_layout()  
    

plt.figure(figsize=(16,10))
#plt.suptitle('Bottom Performing Universities and Colleges (Some Campaign not ended)', fontsize=24)

plt.subplot(2,2,1)
plot_wordcloud(df_text_feat["name"], title="Project Name")

plt.subplot(2,2,2)
plot_wordcloud(df_clean["category"], title="Sub-category")

STEP 4. Dimensionality Reduction (or Feature Selection)

1. Low Variance Filter
2. High Correlation filter
3. Backward Elimination
4. Recursive Feature Elimination

Output from variance, correlation, p-value metric does not reduce much and are not helpful. Later, LDA - did not help. Based on Recursive Elimination using RandomForest Classifier - It gives optimal set of features that can be used for training and testing Predictive model.

  • Optimum number of features: 12
  • Score with 12 features: 0.926990
  • Selected Features: Index(['backers', 'usd_pledged_real', 'usd_goal_real', 'name_len', 'punct%','syllable_count', 'num_chars', 'avg_word', 'launched_year','launched_week', 'duration', 'diff_mean_category_goal'], dtype='object')
In [4]:
#Dataframe
df_feat = pd.read_pickle('df_features.pkl')
df_test= df_feat.head(10000)

y = df_test.state # setting output variable 
features = [c for c in df_test.columns if c not in ["state", "name", "diff_pledged_goal_real", 'country']]
X = df_test[features] # choosing initial features
print("Before Balancing Shape X:", X.shape, "y: ", y.shape)
ad = ADASYN()
X_ad, y_ad = ad.fit_sample(X, y)
print("After Balancing Shape X:", X_ad.shape, "y: ", y_ad.shape)
X_train, X_test, y_train, y_test = train_test_split(X_ad,y_ad, test_size = 0.25, random_state = 0)
#Normalizing the features 
sc_X = StandardScaler() 
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
print("Normalize")
#no of features
nof_list=np.arange(1, X.shape[1])            
high_score=0
#Variable to store the optimum features
nof=0           
score_list =[]
for n in range(len(nof_list)):
    model = RandomForestClassifier(criterion='entropy')
    rfe = RFE(model,nof_list[n])
    X_train_rfe = rfe.fit_transform(X_train,y_train)
    X_test_rfe = rfe.transform(X_test)
    model.fit(X_train_rfe,y_train)
    score = model.score(X_test_rfe,y_test)
    score_list.append(score)
    if(score>high_score):
        high_score = score
        nof = nof_list[n]
        print(n, ": ", nof)

print("Optimum number of features: %d" %nof)
print("Score with %d features: %f" % (nof, high_score))

cols = list(X.columns)
model = RandomForestClassifier(criterion='entropy', n_jobs=3)
#Initializing RFE model
rfe = RFE(model, nof)             
#Transforming data using RFE
X_rfe = rfe.fit_transform(X,y)  
#Fitting the data to model
model.fit(X_rfe,y)              
temp = pd.Series(rfe.support_,index = cols)
selected_features_rfe = temp[temp==True].index
print("Selected Features: ", selected_features_rfe)
Before Balancing Shape X: (10000, 62) y:  (10000,)
After Balancing Shape X: (18242, 62) y:  (18242,)
Normalize
0 :  1
1 :  2
2 :  3
3 :  4
4 :  5
5 :  6
6 :  7
7 :  8
8 :  9
11 :  12
Optimum number of features: 12
Score with 12 features: 0.926990
Selected Features:  Index(['backers', 'usd_pledged_real', 'usd_goal_real', 'name_len', 'punct%',
       'syllable_count', 'num_chars', 'avg_word', 'launched_year',
       'launched_week', 'duration', 'diff_mean_category_goal'],
      dtype='object')

STEP 5. Various Model Evaluation

Modelling Classification:-

  • Rebalance class variable of using data balancing technique ADASYN - Over sampling
  • Save Balanced set of selected feature values for later use such that re-execution of all above steps is not necessary.
  • Apply Various Models with default settings and Check Accuracy/Missclassification Rate
  • Model Prediction done on trained and Test dataset to evaluate whether model predicts good on what it learned and whether it is generalizing on unseen data(i.e Test).

Execute Various Classifier Algorithms and Note Accuracy

  1. Model with Default Parameters
  2. Tuned Model
  • Before Balancing Shape X: (369678, 12) y: (369678,)
  • After Balancing Shape X: (584054, 12) y: (584054,)
In [26]:
## define predictors and label 
#Dataframe
df_feat = pd.read_pickle('df_features.pkl')
labelencoder_X = LabelEncoder() 
df_feat['state'] = labelencoder_X.fit_transform(df_feat['state'])
#df_test= df_feat.head(10000)
features = [c for c in df_feat.columns if c in ['backers', 'usd_pledged_real', 'usd_goal_real', 'name_len', 'punct%',
       'syllable_count', 'num_chars', 'avg_word', 'launched_year',
       'launched_week', 'duration', 'diff_mean_category_goal']]
            
'''            ['category', 'backers', 'usd_pledged_real', 'usd_goal_real', 'name_len',
       'punct%', 'syllable_count', 'num_chars', 'avg_word', 'launched_year',
       'launched_month', 'launched_week', 'duration', 'mean_category_goal',
       'category_count', 'mean_main_category_goal', 'diff_mean_category_goal']'''
X = df_feat[features]
y = df_feat.state
print("Before Balancing Shape X:", X.shape, "y: ", y.shape)
ad = ADASYN()
X, y = ad.fit_sample(X, y)
print("After Balancing Shape X:", X.shape, "y: ", y.shape)
#[c for c in df_feat.columns if c in ["usd_pledged_real","usd_goal_real","diff_mean_category_goal"]]
#[c for c in df_feat.columns if c not in ["state", "name", "backers","usd_pledged_real","diff_pledged_goal_real", 'country']]

#Dataframe
#data = pd.read_pickle('dtm.pkl')
# Let's pickle it for later use
#X.to_pickle("X_without_pledged_backers.pkl")
#y.to_pickle("y_without_pledged_backers.pkl")

with open('X_with_pledged_backers_12.pkl','wb') as f:
    pickle.dump(X, f)
    f.close()
with open('y_with_pledged_backers_12.pkl','wb') as f:
    pickle.dump(y, f)
    f.close()
Before Balancing Shape X: (369678, 12) y:  (369678,)
After Balancing Shape X: (584054, 12) y:  (584054,)

Data samples - balanced without ADASYN - 38659 from each class

In [2]:
df_feat = pd.read_pickle('df_features.pkl')
#configure_plotly_browser_state()
df_cancel = df_feat[df_feat["state"] == "canceled"]
df_failed = df_feat[df_feat["state"] == "failed"]
df_sucess = df_feat[df_feat["state"] == "successful"]

print ("cancel shape:",df_cancel.shape)
print ("fail shape:",df_failed.shape)
print ("success shape:",df_sucess.shape)
cancel shape: (38659, 66)
fail shape: (197168, 66)
success shape: (133851, 66)
In [4]:
new_data = new_data.sample(frac=1).reset_index(drop=True)
new_data = pd.concat([df_cancel.head(38659),df_failed.head(38659),df_sucess.head(38659)])
new_data.shape

def ret_percentage(column):
    return round(column.value_counts(normalize=True) * 100,2)

print(ret_percentage(new_data['state']))
failed        33.33
successful    33.33
canceled      33.33
Name: state, dtype: float64
In [20]:
labelencoder_X = LabelEncoder() 
new_data['state'] = labelencoder_X.fit_transform(new_data['state'])

df_test= new_data.head(10000)

print(ret_percentage(df_test['state']))
2    33.62
0    33.45
1    32.93
Name: state, dtype: float64
In [33]:
#Dataframe
#X = pd.read_pickle('X_without_pledged_backers.pkl')
#y = pd.read_pickle('y_without_pledged_backers.pkl')
#RFE - X_without_pledged_backers
#y_without_pledged_backers
with open('X_with_pledged_backers_12_chosen.pkl','rb') as f:
    X = pickle.load(f)
    print(X.shape)
    f.close()
with open('y_with_pledged_backers_12_chosen.pkl','rb') as f:
    y = pickle.load(f)
    print(y.shape)
    f.close()

#Splitting the data into Training Set and Test Set
## prepare training and testing dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state = 2, stratify=y)

#Normalizing the features 
sc_X = StandardScaler() 
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)

#10k 18184,12
(115977, 12)
(115977,)
In [3]:
from sklearn.metrics import accuracy_score
## Generic Execution Function
def model_exec(model, name):
    print('Running Model')
    model.fit(X_train,y_train)

    #Making predictions on the Train and Test Set
    y_train_pred = model.predict(X_train)
    y_pred = model.predict(X_test)

    #Evaluating the predictions using a Confusion Matrix
    print("Training Set Accuracy:")
    print(accuracy_score(y_train_pred, y_train))
    print("Test Set Accuracy:\n", accuracy_score(y_pred, y_test))

#    print(confusion_matrix(y_train, y_train_pred))
    df_cmtr = pd.DataFrame(confusion_matrix(y_train, y_train_pred), index = ['cancelled', 'failed', 'successful'],
                  columns = ['cancelled(p)', 'failed(p)', 'successful(p)'])
    df_cm = pd.DataFrame(confusion_matrix(y_test, y_pred), index = ['cancelled', 'failed', 'successful'],
                  columns = ['cancelled(p)', 'failed(p)', 'successful(p)'])
    plt.figure(figsize=(14,10))
    s_title = name + ' Confusion Matrix'
    plt.suptitle(s_title, fontsize=16)

    plt.subplot(2,2,1)
    plt.gca().set_title('Train Data')
    sns.heatmap(df_cmtr, annot=True, cmap=plt.cm.Reds)
    plt.subplot(2,2,2)
    plt.gca().set_title('Test Data')
    sns.heatmap(df_cm, annot=True, cmap=plt.cm.Reds)
    plt.show()

    # save the model to disk
    filename = 'Models/'+name+'.sav'
    pickle.dump(model, open(filename, 'wb'))
    return model

def model_exec_final(model, name, X_md, y_md):
    print('Running Model')
    print("Shape of the Train data:", X_md.shape)
    model.fit(X_md,y_md)
    print("Whole data Fit:\n%r" % model.data)
    #Making predictions on the Train and Test Set
    y_train_pred = model.predict(X_train)
    y_pred = model.predict(X_test)

    #Evaluating the predictions using a Confusion Matrix
    print("Training Set Accuracy:")
    print(accuracy_score(y_train_pred, y_train))
    print("Test Set Accuracy:\n", accuracy_score(y_pred, y_test))

    #print(confusion_matrix(y_train, y_train_pred))
    df_cmtr = pd.DataFrame(confusion_matrix(y_train, y_train_pred), index = ['cancelled', 'failed', 'successful'],
                  columns = ['cancelled(p)', 'failed(p)', 'successful(p)'])
    df_cm = pd.DataFrame(confusion_matrix(y_test, y_pred), index = ['cancelled', 'failed', 'successful'],
                  columns = ['cancelled(p)', 'failed(p)', 'successful(p)'])
    plt.figure(figsize=(14,10))
    s_title = name + ' Confusion Matrix'
    plt.suptitle(s_title, fontsize=16)

    plt.subplot(2,2,1)
    plt.gca().set_title('Train Data')
    sns.heatmap(df_cmtr, annot=True, cmap=plt.cm.Reds)
    plt.subplot(2,2,2)
    plt.gca().set_title('Test Data')
    sns.heatmap(df_cm, annot=True, cmap=plt.cm.Reds)
    plt.show()

    # save the model to disk
    filename = 'Models/'+name+'.sav'
    pickle.dump(model, open(filename, 'wb'))
    return model

def feature_imp(model, plt_title, plot):
    # Feature Importance graph
    features = ['backers', 'usd_pledged_real', 'usd_goal_real', 'name_len', 'punct%',
           'syllable_count', 'num_chars', 'avg_word', 'launched_year',
           'launched_week', 'duration', 'diff_mean_category_goal']
    importances = model.feature_importances_
    indices = np.argsort(importances)
    plt.figure(figsize=(16,10))
    plot.title(plt_title)
    plot.barh(range(len(indices)), importances[indices], color='b', align='center')
    plot.yticks(range(len(indices)), [features[i] for i in indices])
    plot.xlabel('Relative Importance')
    plot.show()

RandomForestClassifier

In [10]:
#Fitting Classifier to Training Set. Create a classifier object here and call it classifierObj 
RForest = RandomForestClassifier(criterion='entropy') 
RForest = model_exec(RForest, 'RForest')
#10k - Train 0.9972, Test 0.9472
Running Model
Training Set Accuracy:
0.9931667043899074
Test Set Accuracy:
 0.8735557854802553
In [11]:
#Fitting Classifier to Training Set. Create a classifier object here and call it classifierObj 
TunedRForest = RandomForestClassifier(criterion='entropy', max_depth= 60, n_estimators= 1000, bootstrap=False, random_state=2)
TunedRForest = model_exec(TunedRForest, 'TunedRForest')
Running Model
Training Set Accuracy:
1.0
Test Set Accuracy:
 0.8864890498361786
In [35]:
feature_imp(TunedRForest, "TunedRForest", plt)
In [34]:
# Training predictions (to demonstrate overfitting)
train_rf_probs = TunedRForest.predict_proba(X_train)[:, 1]

# Testing predictions (to determine performance)
rf_probs = TunedRForest.predict_proba(X_test)[:, 1]

n_nodes = []
max_depths = []

# Stats about the trees in random forest
for ind_tree in TunedRForest.estimators_:
    n_nodes.append(ind_tree.tree_.node_count)
    max_depths.append(ind_tree.tree_.max_depth)
    
print(f'Average number of nodes {int(np.mean(n_nodes))}')
print(f'Average maximum depth {int(np.mean(max_depths))}')
Average number of nodes 3192
Average maximum depth 29

STEP 6. Hyper Parameter Tuning using Randomized Search CV -

  • Grid Search looks at all possible combinations of values specified for hyperparameters and gives the best combination.
  • RandomizedSearchCV - Randomly selects combination of different parameters to identify best set of parameters.

Example - RandomForestClassifier

  1. To improve the predictive power of the model :

    • max_features - number of features Random Forest is allowed to try in individual tree (increases perfromance but computationally expensive)
    • n_estimators - increases perfromance but computationally expensive
  1. Features which will make the model training easier
    • n_jobs : -1 uses all CPUs
    • random_state : Easy to replicate. A definite value of random_state will always produce same results if given with same parameters and training data.
In [36]:
def hyper_tuning(params, model):
    # Look at parameters used by our current forest
    print('Parameters currently in use:\n')
    pprint(model.get_params())
    print('Parameters Grid:\n')
    pprint(params)    
    # Random search of parameters, using 3 fold cross validation, 
    # search across 100 different combinations, and use all available cores
    random = RandomizedSearchCV(estimator = model, param_distributions = params, n_iter = 100, cv = 3, verbose=2, random_state=1, n_jobs = 3)
    # Fit the random search model
    random.fit(X_train, y_train)
    print('Best:\n')
    print('Score: ', random.best_score_)
    print ('Estimator: ',random.best_estimator_)
    return random


def plot_hyper(train_res, test_res, paralist, namex, namey):
    
    plt.figure(figsize=(12,6))
    # Draw lines
    plt.plot(paralist, train_res, '--', color="#111111",  label="Training score")
    plt.plot(paralist, test_res, color="#111111", label="Validation score")

    # Create plot
    plt.title("Accuracy Curve")
    plt.xlabel(namex), plt.ylabel(namey), plt.legend(loc="best")
    plt.xticks(paralist)
    plt.tight_layout()

    plt.show()
In [23]:
#Random Forest

RFTuning = RandomForestClassifier(max_depth=60, bootstrap=False, criterion='entropy')

# Number of trees in random forest, linspace returns evenly spaced number
n_estimators = [int(x) for x in np.linspace(start = 10, stop = 775, num = 25)]
# Number of features to consider at every split
max_features = ['auto', 'sqrt']
# Maximum number of levels in tree
max_depth = [int(x) for x in np.linspace(10, 110, num = 5)]
max_depth.append(None)
# Minimum number of samples required to split a node
min_samples_split = [int(x) for x in np.linspace(start = 20, stop = 80, num = 5)]
# Minimum number of samples required at each leaf node
min_samples_leaf = [int(x) for x in np.linspace(start = 1, stop = 80, num = 5)]
# Method of selecting samples for training each tree
bootstrap = [False]
# Create the random grid
random_grid = {'n_estimators': n_estimators}

results = hyper_tuning(random_grid, RFTuning)
Parameters currently in use:

{'bootstrap': False,
 'class_weight': None,
 'criterion': 'entropy',
 'max_depth': 60,
 'max_features': 'auto',
 'max_leaf_nodes': None,
 'min_impurity_decrease': 0.0,
 'min_impurity_split': None,
 'min_samples_leaf': 1,
 'min_samples_split': 2,
 'min_weight_fraction_leaf': 0.0,
 'n_estimators': 'warn',
 'n_jobs': None,
 'oob_score': False,
 'random_state': None,
 'verbose': 0,
 'warm_start': False}
Parameters Grid:

{'n_estimators': [10,
                  41,
                  73,
                  105,
                  137,
                  169,
                  201,
                  233,
                  265,
                  296,
                  328,
                  360,
                  392,
                  424,
                  456,
                  488,
                  520,
                  551,
                  583,
                  615,
                  647,
                  679,
                  711,
                  743,
                  775]}
Fitting 3 folds for each of 25 candidates, totalling 75 fits
[Parallel(n_jobs=3)]: Using backend LokyBackend with 3 concurrent workers.
[Parallel(n_jobs=3)]: Done  35 tasks      | elapsed:  1.7min
[Parallel(n_jobs=3)]: Done  75 out of  75 | elapsed:  7.3min finished
Best:

Score:  0.9361380353337458
Estimator:  RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=711, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False)
In [24]:
pd.DataFrame(results.cv_results_).sort_values('mean_test_score', ascending=False).head()
Out[24]:
mean_fit_time std_fit_time mean_score_time std_score_time param_n_estimators params split0_test_score split1_test_score split2_test_score mean_test_score std_test_score rank_test_score split0_train_score split1_train_score split2_train_score mean_train_score std_train_score
22 30.136463 0.057520 0.624862 0.022085 711 {'n_estimators': 711} 0.940825 0.937925 0.929662 0.936138 0.004729 1 1.0 1.0 1.0 1.0 0.0
7 9.664397 0.051548 0.187447 0.000011 233 {'n_estimators': 233} 0.939381 0.938544 0.930074 0.936001 0.004204 2 1.0 1.0 1.0 1.0 0.0
19 24.626379 0.231953 0.494693 0.026554 615 {'n_estimators': 615} 0.938969 0.938750 0.930074 0.935932 0.004142 3 1.0 1.0 1.0 1.0 0.0
23 30.876124 0.116481 0.667230 0.027538 743 {'n_estimators': 743} 0.938969 0.938544 0.930074 0.935863 0.004096 4 1.0 1.0 1.0 1.0 0.0
21 26.650421 0.165978 0.608592 0.057565 679 {'n_estimators': 679} 0.939588 0.937719 0.929868 0.935726 0.004211 5 1.0 1.0 1.0 1.0 0.0
In [37]:
plot_hyper(results.cv_results_['mean_train_score'], results.cv_results_['mean_test_score'], list(range(0,25)), 'Para_set', 'Accuracy')

STEP 7. KFold and Ensembling: Model Selection

KFold

A model can either suffer from underfitting (high bias) if the model is too simple, or it can overfit the training data (high variance) if the model is too complex for the underlying training data.

Ensemble

The main principle behind ensemble modelling is to group weak learners together to form one strong learner. Combine the decisions from multiple models to improve the overall performance.

Errrors in Model - variance, bias, noise

  1. Max Voting
  2. Averaging
  3. Weighted Averaging
  4. Bagging to decrease the model’s variance; RandomForest (Train Data Bootstraping and Aggregation)
  5. Boosting to decreasing the model’s bias, and; XGBoost (New model is trained from the errors of previous learners)
  6. Stacking to increasing the predictive force of the classifier. (new model is trained from the combined predictions of two (or more) previous model.)
In [4]:
##Selected Models - Computationally efficient and Better Performance


DecTree = DecisionTreeClassifier(criterion='entropy', random_state=2) 
TunedRForest = RandomForestClassifier(criterion='entropy', max_depth= 60, n_estimators= 56, bootstrap=False)
TunedETCLassifier = ExtraTreesClassifier(criterion='entropy', max_depth= 60, n_estimators= 56, bootstrap=False)
TunedBaggingC = BaggingClassifier(DecisionTreeClassifier(random_state=2, criterion='entropy', max_depth= 20))
Tunedmodelxgb=xgb.XGBClassifier(n_estimators=68, max_depth=60, learning_rate=0.1, subsample=0.5, objective ='multi:softmax')
simplegbm_model = lgb.LGBMClassifier(objective = 'multiclass', learning_rate=0.08)
In [39]:
voting = VotingClassifier(estimators=[('TunedRForest', TunedRForest), ('Tunedmodelxgb',Tunedmodelxgb), ('TunedBaggingC',TunedBaggingC)], weights=[1, 2, 1], voting='hard', n_jobs=3)  ## Re-trains each model
voting = model_exec(voting, 'voting')
Running Model
Training Set Accuracy:
0.9997559056777393
Test Set Accuracy:
 0.9406930557457882
In [24]:
#Stacking
!pip install mlens

from itertools import combinations
from mlens.ensemble import SuperLearner
from sklearn.metrics import accuracy_score

names = [TunedRForest, TunedETCLassifier, Tunedmodelxgb, simplegbm_model, TunedBaggingC]
m_names = ['TunedRForest', 'TunedETCLassifier', 'Tunedmodelxgb', 'simplegbm_model', 'TunedBaggingC']

all_combs = []
for r in range(1, len(names) + 1):
    all_combs += list(combinations(names, r))


ncombs = []
for r in range(1, len(m_names) + 1):
    ncombs += list(combinations(m_names, r))

    
best_combination = [0.00, ""]
train_results = []
test_results = []

for idx, clf in enumerate(all_combs):
    #print(idx, val)
    print("Running: ", idx)
    ensemble = SuperLearner(scorer = accuracy_score, random_state = 2, folds = 5, verbose = 2, n_jobs=3)
    ensemble.add(list(clf))
    ensemble.add_meta(DecTree)
    ensemble.fit(X_train_10k, y_train_10k)
    train_preds = ensemble.predict(X_train_10k)
    preds = ensemble.predict(X_test_10k)
    t_acc = accuracy_score(train_preds, y_train_10k)
    train_results.append(t_acc)
    accuracy = accuracy_score(preds, y_test_10k)
    test_results.append(accuracy)
    print("Train: ",t_acc, " Test: ", accuracy)
    print("Whole data Fit:\n%r" % ensemble.data)
    
    if accuracy > best_combination[0]:
        best_combination[0] = accuracy
        best_combination[1] = list(clf)
    
    print("Accuracy score: ", accuracy, list(clf))

print("\n\nBest stacking model is ", best_combination[1], " with accuracy of: ", best_combination[0])
Requirement already satisfied: mlens in c:\programdata\anaconda3\lib\site-packages (0.2.3)
Requirement already satisfied: scipy>=0.17 in c:\programdata\anaconda3\lib\site-packages (from mlens) (1.1.0)
Requirement already satisfied: numpy>=1.11 in c:\programdata\anaconda3\lib\site-packages (from mlens) (1.16.2)
Running:  0

Fitting 2 layers
Processing layer-1             done | 00:00:02
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:02

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  1.0  Test:  0.8655
Whole data Fit:
                                   score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  randomforestclassifier       0.84     0.01  0.99  0.05  0.03  0.01

Accuracy score:  0.8655 [RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False)]
Running:  1

Fitting 2 layers
Processing layer-1             done | 00:00:01
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:01

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  1.0  Test:  0.8205
Whole data Fit:
                                 score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  extratreesclassifier       0.81     0.01  0.43  0.01  0.03  0.01

Accuracy score:  0.8205 [ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
           max_depth=60, max_features='auto', max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=56, n_jobs=None,
           oob_score=False, random_state=None, verbose=0, warm_start=False)]
Running:  2

Fitting 2 layers
Processing layer-1             done | 00:00:10
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:10

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  0.998125  Test:  0.8835
Whole data Fit:
                          score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  xgbclassifier       0.86     0.01  4.23  0.47  0.04  0.00

Accuracy score:  0.8835 [XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
       colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
       max_depth=60, min_child_weight=1, missing=None, n_estimators=68,
       n_jobs=1, nthread=None, objective='multi:softmax', random_state=0,
       reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
       silent=True, subsample=0.5)]
Running:  3

Fitting 2 layers
Processing layer-1             done | 00:00:01
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:01

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  0.94625  Test:  0.8955
Whole data Fit:
                           score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  lgbmclassifier       0.88     0.01  0.64  0.05  0.04  0.00

Accuracy score:  0.8955 [LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0)]
Running:  4

Fitting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  0.990875  Test:  0.849
Whole data Fit:
                              score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  baggingclassifier       0.84     0.01  0.35  0.02  0.00  0.00

Accuracy score:  0.849 [BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)]
Running:  5

Fitting 2 layers
Processing layer-1             done | 00:00:03
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:03

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  1.0  Test:  0.865
Whole data Fit:
                                   score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  extratreesclassifier         0.80     0.01  0.44  0.02  0.04  0.01
layer-1  randomforestclassifier       0.84     0.01  1.05  0.02  0.02  0.00

Accuracy score:  0.865 [RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False), ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
           max_depth=60, max_features='auto', max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=56, n_jobs=None,
           oob_score=False, random_state=None, verbose=0, warm_start=False)]
Running:  6

Fitting 2 layers
Processing layer-1             done | 00:00:11
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:12

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  0.998125  Test:  0.8835
Whole data Fit:
                                   score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  randomforestclassifier       0.84     0.01  1.06  0.03  0.03  0.01
layer-1  xgbclassifier                0.86     0.01  4.16  0.10  0.04  0.00

Accuracy score:  0.8835 [RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False), XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
       colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
       max_depth=60, min_child_weight=1, missing=None, n_estimators=68,
       n_jobs=1, nthread=None, objective='multi:softmax', random_state=0,
       reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
       silent=True, subsample=0.5)]
Running:  7

Fitting 2 layers
Processing layer-1             done | 00:00:03
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:03

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  0.94625  Test:  0.8955
Whole data Fit:
                                   score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  lgbmclassifier               0.88     0.01  0.64  0.02  0.04  0.00
layer-1  randomforestclassifier       0.84     0.01  1.03  0.04  0.02  0.00

Accuracy score:  0.8955 [RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False), LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0)]
Running:  8

Fitting 2 layers
Processing layer-1             done | 00:00:02
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:03

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  0.99975  Test:  0.8655
Whole data Fit:
                                   score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  baggingclassifier            0.83     0.01  0.37  0.04  0.00  0.00
layer-1  randomforestclassifier       0.84     0.01  0.96  0.03  0.02  0.00

Accuracy score:  0.8655 [RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False), BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)]
Running:  9

Fitting 2 layers
Processing layer-1             done | 00:00:09
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:10

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  0.998125  Test:  0.8835
Whole data Fit:
                                 score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  extratreesclassifier       0.81     0.01  0.42  0.01  0.03  0.01
layer-1  xgbclassifier              0.86     0.01  3.84  0.08  0.04  0.00

Accuracy score:  0.8835 [ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
           max_depth=60, max_features='auto', max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=56, n_jobs=None,
           oob_score=False, random_state=None, verbose=0, warm_start=False), XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
       colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
       max_depth=60, min_child_weight=1, missing=None, n_estimators=68,
       n_jobs=1, nthread=None, objective='multi:softmax', random_state=0,
       reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
       silent=True, subsample=0.5)]
Running:  10

Fitting 2 layers
Processing layer-1             done | 00:00:02
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:02

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  0.94625  Test:  0.8955
Whole data Fit:
                                 score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  extratreesclassifier       0.81     0.01  0.42  0.01  0.03  0.01
layer-1  lgbmclassifier             0.88     0.01  0.63  0.02  0.04  0.00

Accuracy score:  0.8955 [ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
           max_depth=60, max_features='auto', max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=56, n_jobs=None,
           oob_score=False, random_state=None, verbose=0, warm_start=False), LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0)]
Running:  11

Fitting 2 layers
Processing layer-1             done | 00:00:01
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:01

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  0.990875  Test:  0.858
Whole data Fit:
                                 score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  baggingclassifier          0.84     0.01  0.34  0.02  0.00  0.00
layer-1  extratreesclassifier       0.81     0.01  0.41  0.02  0.03  0.00

Accuracy score:  0.858 [ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
           max_depth=60, max_features='auto', max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=56, n_jobs=None,
           oob_score=False, random_state=None, verbose=0, warm_start=False), BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)]
Running:  12

Fitting 2 layers
Processing layer-1             done | 00:00:10
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:10

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  0.948125  Test:  0.8955
Whole data Fit:
                           score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  lgbmclassifier       0.88     0.01  0.62  0.02  0.04  0.00
layer-1  xgbclassifier        0.86     0.01  4.16  0.08  0.04  0.00

Accuracy score:  0.8955 [XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
       colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
       max_depth=60, min_child_weight=1, missing=None, n_estimators=68,
       n_jobs=1, nthread=None, objective='multi:softmax', random_state=0,
       reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
       silent=True, subsample=0.5), LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0)]
Running:  13

Fitting 2 layers
Processing layer-1             done | 00:00:10
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:11

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  0.998125  Test:  0.8835
Whole data Fit:
                              score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  baggingclassifier       0.83     0.01  0.39  0.01  0.00  0.00
layer-1  xgbclassifier           0.86     0.01  4.32  0.09  0.04  0.00

Accuracy score:  0.8835 [XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
       colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
       max_depth=60, min_child_weight=1, missing=None, n_estimators=68,
       n_jobs=1, nthread=None, objective='multi:softmax', random_state=0,
       reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
       silent=True, subsample=0.5), BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)]
Running:  14

Fitting 2 layers
Processing layer-1             done | 00:00:02
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:02

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  0.94625  Test:  0.8955
Whole data Fit:
                              score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  baggingclassifier       0.83     0.01  0.38  0.02  0.01  0.00
layer-1  lgbmclassifier          0.88     0.01  0.60  0.02  0.04  0.00

Accuracy score:  0.8955 [LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0), BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)]
Running:  15

Fitting 2 layers
Processing layer-1             done | 00:00:11
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:12

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  0.998125  Test:  0.8835
Whole data Fit:
                                   score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  extratreesclassifier         0.81     0.01  0.44  0.01  0.03  0.00
layer-1  randomforestclassifier       0.84     0.01  0.96  0.01  0.02  0.00
layer-1  xgbclassifier                0.86     0.01  3.84  0.07  0.04  0.00

Accuracy score:  0.8835 [RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False), ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
           max_depth=60, max_features='auto', max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=56, n_jobs=None,
           oob_score=False, random_state=None, verbose=0, warm_start=False), XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
       colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
       max_depth=60, min_child_weight=1, missing=None, n_estimators=68,
       n_jobs=1, nthread=None, objective='multi:softmax', random_state=0,
       reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
       silent=True, subsample=0.5)]
Running:  16

Fitting 2 layers
Processing layer-1             done | 00:00:04
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:04

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  0.948625  Test:  0.8955
Whole data Fit:
                                   score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  extratreesclassifier         0.81     0.01  0.42  0.01  0.03  0.00
layer-1  lgbmclassifier               0.88     0.01  0.61  0.01  0.04  0.00
layer-1  randomforestclassifier       0.84     0.01  0.93  0.02  0.02  0.01

Accuracy score:  0.8955 [RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False), ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
           max_depth=60, max_features='auto', max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=56, n_jobs=None,
           oob_score=False, random_state=None, verbose=0, warm_start=False), LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0)]
Running:  17

Fitting 2 layers
Processing layer-1             done | 00:00:03
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:03

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  0.999875  Test:  0.87
Whole data Fit:
                                   score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  baggingclassifier            0.84     0.01  0.33  0.01  0.01  0.01
layer-1  extratreesclassifier         0.80     0.01  0.42  0.02  0.03  0.01
layer-1  randomforestclassifier       0.84     0.01  0.92  0.02  0.03  0.01

Accuracy score:  0.87 [RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False), ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
           max_depth=60, max_features='auto', max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=56, n_jobs=None,
           oob_score=False, random_state=None, verbose=0, warm_start=False), BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)]
Running:  18

Fitting 2 layers
Processing layer-1             done | 00:00:13
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:13

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  0.946625  Test:  0.894
Whole data Fit:
                                   score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  lgbmclassifier               0.88     0.01  0.70  0.02  0.04  0.00
layer-1  randomforestclassifier       0.84     0.01  1.03  0.01  0.02  0.00
layer-1  xgbclassifier                0.86     0.01  4.13  0.18  0.04  0.00

Accuracy score:  0.894 [RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False), XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
       colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
       max_depth=60, min_child_weight=1, missing=None, n_estimators=68,
       n_jobs=1, nthread=None, objective='multi:softmax', random_state=0,
       reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
       silent=True, subsample=0.5), LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0)]
Running:  19

Fitting 2 layers
Processing layer-1             done | 00:00:11
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:11

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  0.999125  Test:  0.8825
Whole data Fit:
                                   score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  baggingclassifier            0.84     0.01  0.34  0.01  0.00  0.00
layer-1  randomforestclassifier       0.84     0.01  0.93  0.01  0.02  0.00
layer-1  xgbclassifier                0.86     0.01  3.81  0.03  0.04  0.00

Accuracy score:  0.8825 [RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False), XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
       colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
       max_depth=60, min_child_weight=1, missing=None, n_estimators=68,
       n_jobs=1, nthread=None, objective='multi:softmax', random_state=0,
       reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
       silent=True, subsample=0.5), BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)]
Running:  20

Fitting 2 layers
Processing layer-1             done | 00:00:04
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:04

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  0.94625  Test:  0.8945
Whole data Fit:
                                   score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  baggingclassifier            0.83     0.01  0.34  0.01  0.00  0.00
layer-1  lgbmclassifier               0.88     0.01  0.62  0.02  0.04  0.00
layer-1  randomforestclassifier       0.84     0.01  0.94  0.02  0.02  0.00

Accuracy score:  0.8945 [RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False), LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0), BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)]
Running:  21

Fitting 2 layers
Processing layer-1             done | 00:00:11
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:12

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  0.946625  Test:  0.8955
Whole data Fit:
                                 score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  extratreesclassifier       0.80     0.01  0.46  0.02  0.03  0.00
layer-1  lgbmclassifier             0.88     0.01  0.76  0.04  0.04  0.00
layer-1  xgbclassifier              0.86     0.01  4.04  0.09  0.04  0.00

Accuracy score:  0.8955 [ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
           max_depth=60, max_features='auto', max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=56, n_jobs=None,
           oob_score=False, random_state=None, verbose=0, warm_start=False), XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
       colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
       max_depth=60, min_child_weight=1, missing=None, n_estimators=68,
       n_jobs=1, nthread=None, objective='multi:softmax', random_state=0,
       reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
       silent=True, subsample=0.5), LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0)]
Running:  22

Fitting 2 layers
Processing layer-1             done | 00:00:11
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:11

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  0.998125  Test:  0.8835
Whole data Fit:
                                 score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  baggingclassifier          0.83     0.02  0.40  0.02  0.01  0.01
layer-1  extratreesclassifier       0.81     0.01  0.47  0.02  0.04  0.01
layer-1  xgbclassifier              0.86     0.01  4.06  0.16  0.04  0.00

Accuracy score:  0.8835 [ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
           max_depth=60, max_features='auto', max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=56, n_jobs=None,
           oob_score=False, random_state=None, verbose=0, warm_start=False), XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
       colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
       max_depth=60, min_child_weight=1, missing=None, n_estimators=68,
       n_jobs=1, nthread=None, objective='multi:softmax', random_state=0,
       reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
       silent=True, subsample=0.5), BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)]
Running:  23

Fitting 2 layers
Processing layer-1             done | 00:00:03
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:03

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  0.948625  Test:  0.8955
Whole data Fit:
                                 score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  baggingclassifier          0.84     0.01  0.35  0.02  0.00  0.00
layer-1  extratreesclassifier       0.80     0.01  0.42  0.01  0.03  0.00
layer-1  lgbmclassifier             0.88     0.01  0.61  0.02  0.04  0.00

Accuracy score:  0.8955 [ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
           max_depth=60, max_features='auto', max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=56, n_jobs=None,
           oob_score=False, random_state=None, verbose=0, warm_start=False), LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0), BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)]
Running:  24

Fitting 2 layers
Processing layer-1             done | 00:00:10
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:11

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  0.947875  Test:  0.8955
Whole data Fit:
                              score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  baggingclassifier       0.84     0.02  0.34  0.01  0.00  0.00
layer-1  lgbmclassifier          0.88     0.01  0.62  0.01  0.04  0.00
layer-1  xgbclassifier           0.86     0.01  3.91  0.09  0.04  0.00

Accuracy score:  0.8955 [XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
       colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
       max_depth=60, min_child_weight=1, missing=None, n_estimators=68,
       n_jobs=1, nthread=None, objective='multi:softmax', random_state=0,
       reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
       silent=True, subsample=0.5), LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0), BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)]
Running:  25

Fitting 2 layers
Processing layer-1             done | 00:00:13
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:13

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  0.947625  Test:  0.8955
Whole data Fit:
                                   score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  extratreesclassifier         0.80     0.01  0.41  0.01  0.03  0.01
layer-1  lgbmclassifier               0.88     0.01  0.62  0.02  0.04  0.01
layer-1  randomforestclassifier       0.84     0.01  0.94  0.01  0.02  0.00
layer-1  xgbclassifier                0.86     0.01  3.82  0.03  0.04  0.00

Accuracy score:  0.8955 [RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False), ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
           max_depth=60, max_features='auto', max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=56, n_jobs=None,
           oob_score=False, random_state=None, verbose=0, warm_start=False), XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
       colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
       max_depth=60, min_child_weight=1, missing=None, n_estimators=68,
       n_jobs=1, nthread=None, objective='multi:softmax', random_state=0,
       reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
       silent=True, subsample=0.5), LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0)]
Running:  26

Fitting 2 layers
Processing layer-1             done | 00:00:12
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:13

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  0.99825  Test:  0.8845
Whole data Fit:
                                   score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  baggingclassifier            0.84     0.01  0.34  0.01  0.01  0.01
layer-1  extratreesclassifier         0.80     0.01  0.42  0.01  0.03  0.01
layer-1  randomforestclassifier       0.84     0.01  0.94  0.02  0.02  0.01
layer-1  xgbclassifier                0.86     0.01  3.98  0.17  0.04  0.00

Accuracy score:  0.8845 [RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False), ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
           max_depth=60, max_features='auto', max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=56, n_jobs=None,
           oob_score=False, random_state=None, verbose=0, warm_start=False), XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
       colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
       max_depth=60, min_child_weight=1, missing=None, n_estimators=68,
       n_jobs=1, nthread=None, objective='multi:softmax', random_state=0,
       reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
       silent=True, subsample=0.5), BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)]
Running:  27

Fitting 2 layers
Processing layer-1             done | 00:00:05
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:05

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  0.946625  Test:  0.893
Whole data Fit:
                                   score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  baggingclassifier            0.83     0.01  0.38  0.02  0.01  0.00
layer-1  extratreesclassifier         0.80     0.01  0.47  0.02  0.03  0.00
layer-1  lgbmclassifier               0.88     0.01  0.70  0.02  0.04  0.00
layer-1  randomforestclassifier       0.84     0.01  1.08  0.03  0.02  0.00

Accuracy score:  0.893 [RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False), ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
           max_depth=60, max_features='auto', max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=56, n_jobs=None,
           oob_score=False, random_state=None, verbose=0, warm_start=False), LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0), BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)]
Running:  28

Fitting 2 layers
Processing layer-1             done | 00:00:14
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:14

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  0.948  Test:  0.894
Whole data Fit:
                                   score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  baggingclassifier            0.83     0.01  0.40  0.01  0.00  0.00
layer-1  lgbmclassifier               0.88     0.01  0.69  0.04  0.04  0.00
layer-1  randomforestclassifier       0.84     0.01  0.99  0.07  0.03  0.01
layer-1  xgbclassifier                0.86     0.01  4.24  0.19  0.04  0.00

Accuracy score:  0.894 [RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False), XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
       colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
       max_depth=60, min_child_weight=1, missing=None, n_estimators=68,
       n_jobs=1, nthread=None, objective='multi:softmax', random_state=0,
       reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
       silent=True, subsample=0.5), LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0), BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)]
Running:  29

Fitting 2 layers
Processing layer-1             done | 00:00:12
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:12

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  0.948125  Test:  0.8955
Whole data Fit:
                                 score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  baggingclassifier          0.84     0.01  0.38  0.01  0.01  0.00
layer-1  extratreesclassifier       0.81     0.01  0.44  0.01  0.03  0.01
layer-1  lgbmclassifier             0.88     0.01  0.68  0.03  0.04  0.00
layer-1  xgbclassifier              0.86     0.01  4.12  0.07  0.04  0.00

Accuracy score:  0.8955 [ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
           max_depth=60, max_features='auto', max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=56, n_jobs=None,
           oob_score=False, random_state=None, verbose=0, warm_start=False), XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
       colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
       max_depth=60, min_child_weight=1, missing=None, n_estimators=68,
       n_jobs=1, nthread=None, objective='multi:softmax', random_state=0,
       reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
       silent=True, subsample=0.5), LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0), BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)]
Running:  30

Fitting 2 layers
Processing layer-1             done | 00:00:14
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:14

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Train:  0.94675  Test:  0.8935
Whole data Fit:
                                   score-m  score-s  ft-m  ft-s  pt-m  pt-s
layer-1  baggingclassifier            0.83     0.01  0.41  0.02  0.00  0.00
layer-1  extratreesclassifier         0.81     0.01  0.46  0.02  0.04  0.01
layer-1  lgbmclassifier               0.88     0.01  0.76  0.02  0.04  0.00
layer-1  randomforestclassifier       0.84     0.01  0.99  0.04  0.03  0.01
layer-1  xgbclassifier                0.86     0.01  3.90  0.05  0.04  0.00

Accuracy score:  0.8935 [RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False), ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
           max_depth=60, max_features='auto', max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=56, n_jobs=None,
           oob_score=False, random_state=None, verbose=0, warm_start=False), XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
       colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
       max_depth=60, min_child_weight=1, missing=None, n_estimators=68,
       n_jobs=1, nthread=None, objective='multi:softmax', random_state=0,
       reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
       silent=True, subsample=0.5), LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0), BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)]


Best stacking model is  [LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0)]  with accuracy of:  0.8955
In [25]:
plt.figure(figsize=(12,6))
# Draw lines
plt.plot(list(range(0,31)), train_results, '--', color="#111111",  label="Training score")
plt.plot(list(range(0,31)), test_results, color="#111111", label="validation score")

# Create plot
plt.title("Learning Curve")
plt.xlabel("Stacked Model"), plt.ylabel("Accuracy Score"), plt.legend(loc="best")
test = list(range(0,31))
plt.xticks(test)
plt.tight_layout()

plt.show()
In [3]:
from itertools import combinations
m_names = ['TunedRForest', 'TunedETCLassifier', 'Tunedmodelxgb', 'simplegbm_model', 'TunedBaggingC']

ncombs = []
for r in range(1, len(m_names) + 1):
    ncombs += list(combinations(m_names, r))
    
for idx, val in enumerate(ncombs):
    print("Number: ", idx, "Model: ", list(val))
Number:  0 Model:  ['TunedRForest']
Number:  1 Model:  ['TunedETCLassifier']
Number:  2 Model:  ['Tunedmodelxgb']
Number:  3 Model:  ['simplegbm_model']
Number:  4 Model:  ['TunedBaggingC']
Number:  5 Model:  ['TunedRForest', 'TunedETCLassifier']
Number:  6 Model:  ['TunedRForest', 'Tunedmodelxgb']
Number:  7 Model:  ['TunedRForest', 'simplegbm_model']
Number:  8 Model:  ['TunedRForest', 'TunedBaggingC']
Number:  9 Model:  ['TunedETCLassifier', 'Tunedmodelxgb']
Number:  10 Model:  ['TunedETCLassifier', 'simplegbm_model']
Number:  11 Model:  ['TunedETCLassifier', 'TunedBaggingC']
Number:  12 Model:  ['Tunedmodelxgb', 'simplegbm_model']
Number:  13 Model:  ['Tunedmodelxgb', 'TunedBaggingC']
Number:  14 Model:  ['simplegbm_model', 'TunedBaggingC']
Number:  15 Model:  ['TunedRForest', 'TunedETCLassifier', 'Tunedmodelxgb']
Number:  16 Model:  ['TunedRForest', 'TunedETCLassifier', 'simplegbm_model']
Number:  17 Model:  ['TunedRForest', 'TunedETCLassifier', 'TunedBaggingC']
Number:  18 Model:  ['TunedRForest', 'Tunedmodelxgb', 'simplegbm_model']
Number:  19 Model:  ['TunedRForest', 'Tunedmodelxgb', 'TunedBaggingC']
Number:  20 Model:  ['TunedRForest', 'simplegbm_model', 'TunedBaggingC']
Number:  21 Model:  ['TunedETCLassifier', 'Tunedmodelxgb', 'simplegbm_model']
Number:  22 Model:  ['TunedETCLassifier', 'Tunedmodelxgb', 'TunedBaggingC']
Number:  23 Model:  ['TunedETCLassifier', 'simplegbm_model', 'TunedBaggingC']
Number:  24 Model:  ['Tunedmodelxgb', 'simplegbm_model', 'TunedBaggingC']
Number:  25 Model:  ['TunedRForest', 'TunedETCLassifier', 'Tunedmodelxgb', 'simplegbm_model']
Number:  26 Model:  ['TunedRForest', 'TunedETCLassifier', 'Tunedmodelxgb', 'TunedBaggingC']
Number:  27 Model:  ['TunedRForest', 'TunedETCLassifier', 'simplegbm_model', 'TunedBaggingC']
Number:  28 Model:  ['TunedRForest', 'Tunedmodelxgb', 'simplegbm_model', 'TunedBaggingC']
Number:  29 Model:  ['TunedETCLassifier', 'Tunedmodelxgb', 'simplegbm_model', 'TunedBaggingC']
Number:  30 Model:  ['TunedRForest', 'TunedETCLassifier', 'Tunedmodelxgb', 'simplegbm_model', 'TunedBaggingC']

Finally Selected Model

Selected Random Forest(Tuned) as it gives accuracy of 0.88, 0.008 and better on whole dataset 0.7960.

Models with low bias and low variance.

Model Training Accuracy Testing Accuracy KFold Time Size Complexity
Random Forest(Tuned) 0.9179 0.9223 0.88, 0.00 1m 28 1.41 GB Somewhat Interepretable and Complex
XGboost(Tuned) 0.9301 0.9355 0.91, 0.00 9m 20s 281 MB Somewhat Interepretable and Complex
Stacking (RForest, LGBM) 0.8715 0.8770 (Individual) RF 0.90, 0.88 others, 0.00 31.5s 444MB Somewhat Interepretable and Complex
  • To avoid overfit - KFold Cross validation to check fold train/test score distributions. TRain and Test should not vary much.
  • To avoid bias - stratify sampling and ADASYN Oversampling to balance class variable.

After taking equal samples from original data:

  • RF Trained on Final Data - 0.7960
  • XGB Trained on Final Data - 0.7468
  • Stacked (RF and LGBM) - 0.7362
In [15]:
#Normalizing the features - based on traning 
#Normalizing the features 
#sc_X = StandardScaler() 
#X_train = sc_X.fit_transform(X_train)
#X_test = sc_X.transform(X_test)

from mlens.ensemble import SuperLearner
from sklearn.metrics import accuracy_score
###FINAL DELIVERABLE - RF

ensembleRFT = SuperLearner(scorer = accuracy_score, random_state=2, folds=10, n_jobs=3)

# Build the first layer
ensembleRFT.add([TunedRForest])
# Attach the final meta estimator
ensembleRFT.add_meta(DecTree)
ensembleRFT = model_exec_final(ensembleRFT, 'ensembleRFT', X_train, y_train)
Running Model
Shape of the Train data: (92781, 12)
Whole data Fit:
                                   score-m  score-s   ft-m  ft-s  pt-m  pt-s
layer-1  randomforestclassifier       0.88     0.00  14.16  0.51  0.14  0.01

Training Set Accuracy:
1.0
Test Set Accuracy:
 0.8839455078461804
In [38]:
from mlens.ensemble import SuperLearner
from sklearn.metrics import accuracy_score
###FINAL DELIVERABLE - RF

ensembleRF = SuperLearner(scorer = accuracy_score, random_state=2, folds=10, n_jobs=3)

# Build the first layer
ensembleRF.add([TunedRForest])
# Attach the final meta estimator
ensembleRF.add_meta(DecTree)
ensembleRF = model_exec_final(ensembleRF, 'FinalRFTrained',  X_tr, y_tr)
Running Model
Shape of the Train data: (115861, 12)
Whole data Fit:
                                   score-m  score-s   ft-m  ft-s  pt-m  pt-s
layer-1  randomforestclassifier       0.88     0.00  18.32  0.46  0.19  0.01

Training Set Accuracy:
0.9179788965413177
Test Set Accuracy:
 0.922357302983273
In [17]:
from mlens.ensemble import SuperLearner
from sklearn.metrics import accuracy_score
###FINAL DELIVERABLE - XGB

ensembleXGBT = SuperLearner(scorer = accuracy_score, random_state=2, folds=10, n_jobs=3)

# Build the first layer
ensembleXGBT.add([Tunedmodelxgb])
# Attach the final meta estimator
ensembleXGBT.add_meta(DecTree)
ensembleXGBT = model_exec_final(ensembleXGBT, 'ensembleXGBT', X_train, y_train)
Running Model
Shape of the Train data: (92781, 12)
Whole data Fit:
                          score-m  score-s   ft-m  ft-s  pt-m  pt-s
layer-1  xgbclassifier       0.91     0.00  99.52  4.35  0.67  0.06

Training Set Accuracy:
0.9984264019572973
Test Set Accuracy:
 0.9119244697361614
In [35]:
from mlens.ensemble import SuperLearner
from sklearn.metrics import accuracy_score
###FINAL DELIVERABLE - XGB

ensembleXGB = SuperLearner(scorer = accuracy_score, random_state=2, folds=10, n_jobs=3)

# Build the first layer
ensembleXGB.add([Tunedmodelxgb])
# Attach the final meta estimator
ensembleXGB.add_meta(DecTree)
ensembleXGB = model_exec_final(ensembleXGB, 'FinalXGBTrained',  X_tr, y_tr)
Running Model
Shape of the Train data: (115861, 12)
Whole data Fit:
                          score-m  score-s    ft-m  ft-s  pt-m  pt-s
layer-1  xgbclassifier       0.91     0.00  136.98  5.91  0.92  0.13

Training Set Accuracy:
0.930136558131514
Test Set Accuracy:
 0.9355923435075013
In [28]:
#Stacking - 10k, 10 Fold
!pip install mlens
from mlens.ensemble import SuperLearner
from sklearn.metrics import accuracy_score
###FINAL DELIVERABLE - XGB
ensemble = SuperLearner(scorer = accuracy_score, random_state=2, folds=10, n_jobs=-1)

# Build the first layer
ensemble.add([TunedRForest,simplegbm_model])
# Attach the final meta estimator
ensemble.add_meta(DecTree)
ensemble = model_exec_final(ensemble, 'Finalensemble', X_train, y_train)
Requirement already satisfied: mlens in c:\programdata\anaconda3\lib\site-packages (0.2.3)
Requirement already satisfied: scipy>=0.17 in c:\programdata\anaconda3\lib\site-packages (from mlens) (1.1.0)
Requirement already satisfied: numpy>=1.11 in c:\programdata\anaconda3\lib\site-packages (from mlens) (1.16.2)
Running Model
Shape of the Train data: (92781, 12)
Whole data Fit:
                                   score-m  score-s   ft-m  ft-s  pt-m  pt-s
layer-1  lgbmclassifier               0.90     0.00   9.11  2.13  0.46  0.12
layer-1  randomforestclassifier       0.88     0.00  26.00  7.39  0.18  0.05

Training Set Accuracy:
0.9129886506935687
Test Set Accuracy:
 0.9034747370236248
In [34]:
#Stacking -10 Fold
!pip install mlens
from mlens.ensemble import SuperLearner
from sklearn.metrics import accuracy_score
###FINAL DELIVERABLE - XGB

with open('X_with_pledged_backers_12_chosen.pkl','rb') as f:
    X = pickle.load(f)
    print(X.shape)
    f.close()
with open('y_with_pledged_backers_12_chosen.pkl','rb') as f:
    y = pickle.load(f)
    print(y.shape)
    f.close()

#Splitting the data into Training Set and Test Set
## prepare training and testing dataset
#X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state = 2, stratify=y)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size = 0.001, random_state = 2, stratify=y)

#Normalizing the features 
sc_X = StandardScaler() 
X_tr = sc_X.fit_transform(X_tr)

ensemble = SuperLearner(scorer = accuracy_score, random_state=2, folds=3, n_jobs=-1)

# Build the first layer
ensemble.add([TunedRForest, simplegbm_model])
# Attach the final meta estimator
ensemble.add_meta(DecTree)
ensemble = model_exec_final(ensemble, 'Finalensemblewhole', X_tr, y_tr)
Requirement already satisfied: mlens in c:\programdata\anaconda3\lib\site-packages (0.2.3)
Requirement already satisfied: scipy>=0.17 in c:\programdata\anaconda3\lib\site-packages (from mlens) (1.1.0)
Requirement already satisfied: numpy>=1.11 in c:\programdata\anaconda3\lib\site-packages (from mlens) (1.16.2)
(115977, 12)
(115977,)
Running Model
Shape of the Train data: (115861, 12)
Whole data Fit:
                                   score-m  score-s   ft-m  ft-s  pt-m  pt-s
layer-1  lgbmclassifier               0.90     0.00   5.24  0.11  1.23  0.02
layer-1  randomforestclassifier       0.88     0.00  16.44  0.15  0.53  0.01

Training Set Accuracy:
0.8715577542815878
Test Set Accuracy:
 0.8770477668563545

Neurons - Keras Multiclass Classifier with Four Layers

Drawing

In [21]:
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from keras.utils import np_utils
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.pipeline import Pipeline
In [22]:
# fix random seed for reproducibility
seed = 2
#np.random.seed(seed)
# define baseline model
def baseline_model():
	# create model
	model = Sequential()
	#First Hidden 
	model.add(Dense(12, input_dim=12, activation='relu'))
	#Second Hidden
	model.add(Dense(8, input_dim=12, activation='relu'))
	#Third Hidden
	model.add(Dense(8, input_dim=8, activation='relu'))
  #4th Hidden
	model.add(Dense(8, input_dim=8, activation='relu'))
	model.add(Dense(3, activation='softmax'))
	# Compile model
	model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
	return model
In [24]:
model = baseline_model()
print("Running NN")

model.fit(X_train, yt_kr, epochs = 1500, batch_size = 150)

scores = model.evaluate(X_test, ytest_kr)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
Running NN
Epoch 1/1500
 - 3s - loss: 0.5979 - acc: 0.7327
Epoch 2/1500
 - 3s - loss: 0.4310 - acc: 0.8100
Epoch 3/1500
 - 3s - loss: 0.3981 - acc: 0.8215
Epoch 4/1500
 - 3s - loss: 0.3852 - acc: 0.8253
Epoch 5/1500
 - 3s - loss: 0.3789 - acc: 0.8273
Epoch 6/1500
 - 3s - loss: 0.3749 - acc: 0.8284
Epoch 7/1500
 - 3s - loss: 0.3720 - acc: 0.8291
Epoch 8/1500
 - 3s - loss: 0.3706 - acc: 0.8298
Epoch 9/1500
 - 3s - loss: 0.3668 - acc: 0.8311
Epoch 10/1500
 - 3s - loss: 0.3648 - acc: 0.8322
Epoch 11/1500
 - 3s - loss: 0.3619 - acc: 0.8340
Epoch 12/1500
 - 3s - loss: 0.3611 - acc: 0.8353
Epoch 13/1500
 - 3s - loss: 0.3595 - acc: 0.8359
Epoch 14/1500
 - 3s - loss: 0.3576 - acc: 0.8366
Epoch 15/1500
 - 3s - loss: 0.3544 - acc: 0.8393
Epoch 16/1500
 - 3s - loss: 0.3551 - acc: 0.8387
Epoch 17/1500
 - 3s - loss: 0.3538 - acc: 0.8397
Epoch 18/1500
 - 3s - loss: 0.3519 - acc: 0.8408
Epoch 19/1500
 - 3s - loss: 0.3514 - acc: 0.8416
Epoch 20/1500
 - 3s - loss: 0.3488 - acc: 0.8430
Epoch 21/1500
 - 3s - loss: 0.3478 - acc: 0.8443
Epoch 22/1500
 - 3s - loss: 0.3448 - acc: 0.8462
Epoch 23/1500
 - 3s - loss: 0.3427 - acc: 0.8484
Epoch 24/1500
 - 3s - loss: 0.3403 - acc: 0.8506
Epoch 25/1500
 - 3s - loss: 0.3343 - acc: 0.8543
Epoch 26/1500
 - 3s - loss: 0.3306 - acc: 0.8568
Epoch 27/1500
 - 3s - loss: 0.3280 - acc: 0.8588
Epoch 28/1500
 - 3s - loss: 0.3248 - acc: 0.8599
Epoch 29/1500
 - 3s - loss: 0.3225 - acc: 0.8621
Epoch 30/1500
 - 3s - loss: 0.3206 - acc: 0.8629
Epoch 31/1500
 - 3s - loss: 0.3189 - acc: 0.8638
Epoch 32/1500
 - 3s - loss: 0.3191 - acc: 0.8642
Epoch 33/1500
 - 3s - loss: 0.3173 - acc: 0.8645
Epoch 34/1500
 - 3s - loss: 0.3166 - acc: 0.8654
Epoch 35/1500
 - 3s - loss: 0.3155 - acc: 0.8654
Epoch 36/1500
 - 3s - loss: 0.3139 - acc: 0.8660
Epoch 37/1500
 - 3s - loss: 0.3141 - acc: 0.8660
Epoch 38/1500
 - 3s - loss: 0.3129 - acc: 0.8663
Epoch 39/1500
 - 3s - loss: 0.3125 - acc: 0.8668
Epoch 40/1500
 - 3s - loss: 0.3122 - acc: 0.8668
Epoch 41/1500
 - 3s - loss: 0.3118 - acc: 0.8668
Epoch 42/1500
 - 3s - loss: 0.3101 - acc: 0.8677
Epoch 43/1500
 - 3s - loss: 0.3110 - acc: 0.8677
Epoch 44/1500
 - 3s - loss: 0.3106 - acc: 0.8673
Epoch 45/1500
 - 3s - loss: 0.3093 - acc: 0.8682
Epoch 46/1500
 - 3s - loss: 0.3101 - acc: 0.8676
Epoch 47/1500
 - 3s - loss: 0.3095 - acc: 0.8678
Epoch 48/1500
 - 3s - loss: 0.3077 - acc: 0.8692
Epoch 49/1500
 - 3s - loss: 0.3072 - acc: 0.8688
Epoch 50/1500
 - 3s - loss: 0.3071 - acc: 0.8688
Epoch 51/1500
 - 3s - loss: 0.3080 - acc: 0.8689
Epoch 52/1500
 - 3s - loss: 0.3077 - acc: 0.8687
Epoch 53/1500
 - 3s - loss: 0.3076 - acc: 0.8689
Epoch 54/1500
 - 3s - loss: 0.3079 - acc: 0.8687
Epoch 55/1500
 - 2s - loss: 0.3085 - acc: 0.8685
Epoch 56/1500
 - 3s - loss: 0.3080 - acc: 0.8686
Epoch 57/1500
 - 3s - loss: 0.3076 - acc: 0.8688
Epoch 58/1500
 - 3s - loss: 0.3061 - acc: 0.8697
Epoch 59/1500
 - 3s - loss: 0.3069 - acc: 0.8692
Epoch 60/1500
 - 3s - loss: 0.3060 - acc: 0.8692
Epoch 61/1500
 - 3s - loss: 0.3054 - acc: 0.8697
Epoch 62/1500
 - 2s - loss: 0.3061 - acc: 0.8695
Epoch 63/1500
 - 3s - loss: 0.3055 - acc: 0.8696
Epoch 64/1500
 - 3s - loss: 0.3053 - acc: 0.8693
Epoch 65/1500
 - 3s - loss: 0.3049 - acc: 0.8703
Epoch 66/1500
 - 2s - loss: 0.3048 - acc: 0.8702
Epoch 67/1500
 - 3s - loss: 0.3037 - acc: 0.8704
Epoch 68/1500
 - 2s - loss: 0.3039 - acc: 0.8706
Epoch 69/1500
 - 3s - loss: 0.3042 - acc: 0.8701
Epoch 70/1500
 - 3s - loss: 0.3026 - acc: 0.8713
Epoch 71/1500
 - 3s - loss: 0.3034 - acc: 0.8703
Epoch 72/1500
 - 3s - loss: 0.3024 - acc: 0.8708
Epoch 73/1500
 - 2s - loss: 0.3020 - acc: 0.8710
Epoch 74/1500
 - 3s - loss: 0.3024 - acc: 0.8709
Epoch 75/1500
 - 3s - loss: 0.3018 - acc: 0.8712
Epoch 76/1500
 - 3s - loss: 0.3020 - acc: 0.8708
Epoch 77/1500
 - 3s - loss: 0.3021 - acc: 0.8710
Epoch 78/1500
 - 3s - loss: 0.3018 - acc: 0.8713
Epoch 79/1500
 - 3s - loss: 0.3015 - acc: 0.8712
Epoch 80/1500
 - 2s - loss: 0.3008 - acc: 0.8717
Epoch 81/1500
 - 3s - loss: 0.3007 - acc: 0.8718
Epoch 82/1500
 - 2s - loss: 0.3019 - acc: 0.8712
Epoch 83/1500
 - 3s - loss: 0.3013 - acc: 0.8711
Epoch 84/1500
 - 3s - loss: 0.3004 - acc: 0.8715
Epoch 85/1500
 - 3s - loss: 0.3002 - acc: 0.8713
Epoch 86/1500
 - 3s - loss: 0.2999 - acc: 0.8721
Epoch 87/1500
 - 3s - loss: 0.3003 - acc: 0.8714
Epoch 88/1500
 - 3s - loss: 0.3002 - acc: 0.8716
Epoch 89/1500
 - 3s - loss: 0.2993 - acc: 0.8722
Epoch 90/1500
 - 3s - loss: 0.2985 - acc: 0.8726
Epoch 91/1500
 - 3s - loss: 0.2993 - acc: 0.8722
Epoch 92/1500
 - 2s - loss: 0.2988 - acc: 0.8719
Epoch 93/1500
 - 2s - loss: 0.2995 - acc: 0.8719
Epoch 94/1500
 - 3s - loss: 0.2991 - acc: 0.8720
Epoch 95/1500
 - 3s - loss: 0.2989 - acc: 0.8721
Epoch 96/1500
 - 3s - loss: 0.2982 - acc: 0.8726
Epoch 97/1500
 - 3s - loss: 0.2979 - acc: 0.8727
Epoch 98/1500
 - 3s - loss: 0.2983 - acc: 0.8726
Epoch 99/1500
 - 3s - loss: 0.2989 - acc: 0.8719
Epoch 100/1500
 - 3s - loss: 0.2982 - acc: 0.8723
Epoch 101/1500
 - 3s - loss: 0.2977 - acc: 0.8725
Epoch 102/1500
 - 3s - loss: 0.2976 - acc: 0.8725
Epoch 103/1500
 - 3s - loss: 0.2976 - acc: 0.8722
Epoch 104/1500
 - 3s - loss: 0.2975 - acc: 0.8724
Epoch 105/1500
 - 3s - loss: 0.2973 - acc: 0.8728
Epoch 106/1500
 - 3s - loss: 0.2975 - acc: 0.8725
Epoch 107/1500
 - 3s - loss: 0.2977 - acc: 0.8727
Epoch 108/1500
 - 3s - loss: 0.2979 - acc: 0.8723
Epoch 109/1500
 - 3s - loss: 0.2971 - acc: 0.8726
Epoch 110/1500
 - 3s - loss: 0.2970 - acc: 0.8732
Epoch 111/1500
 - 3s - loss: 0.2965 - acc: 0.8729
Epoch 112/1500
 - 3s - loss: 0.2972 - acc: 0.8725
Epoch 113/1500
 - 2s - loss: 0.2969 - acc: 0.8730
Epoch 114/1500
 - 3s - loss: 0.2970 - acc: 0.8728
Epoch 115/1500
 - 3s - loss: 0.2966 - acc: 0.8730
Epoch 116/1500
 - 3s - loss: 0.2959 - acc: 0.8732
Epoch 117/1500
 - 3s - loss: 0.2963 - acc: 0.8731
Epoch 118/1500
 - 3s - loss: 0.2963 - acc: 0.8732
Epoch 119/1500
 - 3s - loss: 0.2959 - acc: 0.8733
Epoch 120/1500
 - 3s - loss: 0.2956 - acc: 0.8735
Epoch 121/1500
 - 3s - loss: 0.2957 - acc: 0.8736
Epoch 122/1500
 - 3s - loss: 0.2954 - acc: 0.8737
Epoch 123/1500
 - 3s - loss: 0.2956 - acc: 0.8732
Epoch 124/1500
 - 3s - loss: 0.2953 - acc: 0.8737
Epoch 125/1500
 - 3s - loss: 0.2949 - acc: 0.8738
Epoch 126/1500
 - 3s - loss: 0.2961 - acc: 0.8736
Epoch 127/1500
 - 3s - loss: 0.2957 - acc: 0.8731
Epoch 128/1500
 - 3s - loss: 0.2950 - acc: 0.8736
Epoch 129/1500
 - 3s - loss: 0.2945 - acc: 0.8736
Epoch 130/1500
 - 3s - loss: 0.2940 - acc: 0.8740
Epoch 131/1500
 - 2s - loss: 0.2957 - acc: 0.8736
Epoch 132/1500
 - 3s - loss: 0.2946 - acc: 0.8741
Epoch 133/1500
 - 3s - loss: 0.2946 - acc: 0.8739
Epoch 134/1500
 - 3s - loss: 0.2939 - acc: 0.8743
Epoch 135/1500
 - 3s - loss: 0.2941 - acc: 0.8745
Epoch 136/1500
 - 2s - loss: 0.2938 - acc: 0.8743
Epoch 137/1500
 - 3s - loss: 0.2944 - acc: 0.8743
Epoch 138/1500
 - 3s - loss: 0.2951 - acc: 0.8737
Epoch 139/1500
 - 3s - loss: 0.2932 - acc: 0.8747
Epoch 140/1500
 - 3s - loss: 0.2933 - acc: 0.8747
Epoch 141/1500
 - 3s - loss: 0.2929 - acc: 0.8752
Epoch 142/1500
 - 3s - loss: 0.2942 - acc: 0.8748
Epoch 143/1500
 - 3s - loss: 0.2934 - acc: 0.8747
Epoch 144/1500
 - 2s - loss: 0.2926 - acc: 0.8753
Epoch 145/1500
 - 3s - loss: 0.2931 - acc: 0.8751
Epoch 146/1500
 - 3s - loss: 0.2928 - acc: 0.8750
Epoch 147/1500
 - 3s - loss: 0.2923 - acc: 0.8753
Epoch 148/1500
 - 3s - loss: 0.2934 - acc: 0.8749
Epoch 149/1500
 - 3s - loss: 0.2931 - acc: 0.8752
Epoch 150/1500
 - 3s - loss: 0.2924 - acc: 0.8751
Epoch 151/1500
 - 3s - loss: 0.2924 - acc: 0.8756
Epoch 152/1500
 - 3s - loss: 0.2913 - acc: 0.8760
Epoch 153/1500
 - 3s - loss: 0.2917 - acc: 0.8756
Epoch 154/1500
 - 3s - loss: 0.2929 - acc: 0.8753
Epoch 155/1500
 - 3s - loss: 0.2920 - acc: 0.8754
Epoch 156/1500
 - 3s - loss: 0.2915 - acc: 0.8755
Epoch 157/1500
 - 3s - loss: 0.2912 - acc: 0.8759
Epoch 158/1500
 - 3s - loss: 0.2914 - acc: 0.8758
Epoch 159/1500
 - 3s - loss: 0.2908 - acc: 0.8758
Epoch 160/1500
 - 2s - loss: 0.2915 - acc: 0.8759
Epoch 161/1500
 - 3s - loss: 0.2913 - acc: 0.8759
Epoch 162/1500
 - 3s - loss: 0.2912 - acc: 0.8758
Epoch 163/1500
 - 3s - loss: 0.2905 - acc: 0.8760
Epoch 164/1500
 - 3s - loss: 0.2909 - acc: 0.8760
Epoch 165/1500
 - 3s - loss: 0.2902 - acc: 0.8763
Epoch 166/1500
 - 2s - loss: 0.2904 - acc: 0.8760
Epoch 167/1500
 - 2s - loss: 0.2907 - acc: 0.8760
Epoch 168/1500
 - 2s - loss: 0.2899 - acc: 0.8764
Epoch 169/1500
 - 2s - loss: 0.2900 - acc: 0.8765
Epoch 170/1500
 - 2s - loss: 0.2892 - acc: 0.8765
Epoch 171/1500
 - 3s - loss: 0.2895 - acc: 0.8765
Epoch 172/1500
 - 2s - loss: 0.2904 - acc: 0.8764
Epoch 173/1500
 - 2s - loss: 0.2893 - acc: 0.8763
Epoch 174/1500
 - 2s - loss: 0.2894 - acc: 0.8767
Epoch 175/1500
 - 3s - loss: 0.2895 - acc: 0.8768
Epoch 176/1500
 - 2s - loss: 0.2885 - acc: 0.8767
Epoch 177/1500
 - 3s - loss: 0.2888 - acc: 0.8767
Epoch 178/1500
 - 2s - loss: 0.2890 - acc: 0.8765
Epoch 179/1500
 - 2s - loss: 0.2888 - acc: 0.8769
Epoch 180/1500
 - 2s - loss: 0.2889 - acc: 0.8766
Epoch 181/1500
 - 2s - loss: 0.2883 - acc: 0.8771
Epoch 182/1500
 - 2s - loss: 0.2892 - acc: 0.8768
Epoch 183/1500
 - 2s - loss: 0.2889 - acc: 0.8767
Epoch 184/1500
 - 3s - loss: 0.2879 - acc: 0.8773
Epoch 185/1500
 - 2s - loss: 0.2874 - acc: 0.8776
Epoch 186/1500
 - 2s - loss: 0.2874 - acc: 0.8772
Epoch 187/1500
 - 2s - loss: 0.2872 - acc: 0.8771
Epoch 188/1500
 - 2s - loss: 0.2862 - acc: 0.8779
Epoch 189/1500
 - 2s - loss: 0.2874 - acc: 0.8775
Epoch 190/1500
 - 3s - loss: 0.2876 - acc: 0.8770
Epoch 191/1500
 - 2s - loss: 0.2879 - acc: 0.8769
Epoch 192/1500
 - 2s - loss: 0.2874 - acc: 0.8772
Epoch 193/1500
 - 2s - loss: 0.2874 - acc: 0.8772
Epoch 194/1500
 - 2s - loss: 0.2871 - acc: 0.8775
Epoch 195/1500
 - 3s - loss: 0.2862 - acc: 0.8775
Epoch 196/1500
 - 3s - loss: 0.2866 - acc: 0.8775
Epoch 197/1500
 - 3s - loss: 0.2867 - acc: 0.8775
Epoch 198/1500
 - 3s - loss: 0.2857 - acc: 0.8783
Epoch 199/1500
 - 3s - loss: 0.2858 - acc: 0.8780
Epoch 200/1500
 - 3s - loss: 0.2854 - acc: 0.8780
Epoch 201/1500
 - 2s - loss: 0.2850 - acc: 0.8783
Epoch 202/1500
 - 2s - loss: 0.2853 - acc: 0.8786
Epoch 203/1500
 - 3s - loss: 0.2850 - acc: 0.8782
Epoch 204/1500
 - 2s - loss: 0.2848 - acc: 0.8784
Epoch 205/1500
 - 2s - loss: 0.2850 - acc: 0.8782
Epoch 206/1500
 - 2s - loss: 0.2847 - acc: 0.8786
Epoch 207/1500
 - 2s - loss: 0.2849 - acc: 0.8786
Epoch 208/1500
 - 2s - loss: 0.2847 - acc: 0.8785
Epoch 209/1500
 - 3s - loss: 0.2846 - acc: 0.8789
Epoch 210/1500
 - 2s - loss: 0.2841 - acc: 0.8790
Epoch 211/1500
 - 2s - loss: 0.2857 - acc: 0.8788
Epoch 212/1500
 - 2s - loss: 0.2839 - acc: 0.8793
Epoch 213/1500
 - 2s - loss: 0.2830 - acc: 0.8799
Epoch 214/1500
 - 2s - loss: 0.2824 - acc: 0.8796
Epoch 215/1500
 - 2s - loss: 0.2820 - acc: 0.8800
Epoch 216/1500
 - 3s - loss: 0.2810 - acc: 0.8802
Epoch 217/1500
 - 2s - loss: 0.2804 - acc: 0.8805
Epoch 218/1500
 - 2s - loss: 0.2796 - acc: 0.8803
Epoch 219/1500
 - 2s - loss: 0.2783 - acc: 0.8814
Epoch 220/1500
 - 3s - loss: 0.2776 - acc: 0.8819
Epoch 221/1500
 - 2s - loss: 0.2783 - acc: 0.8814
Epoch 222/1500
 - 3s - loss: 0.2772 - acc: 0.8821
Epoch 223/1500
 - 2s - loss: 0.2774 - acc: 0.8822
Epoch 224/1500
 - 2s - loss: 0.2769 - acc: 0.8822
Epoch 225/1500
 - 2s - loss: 0.2765 - acc: 0.8825
Epoch 226/1500
 - 2s - loss: 0.2767 - acc: 0.8824
Epoch 227/1500
 - 2s - loss: 0.2765 - acc: 0.8823
Epoch 228/1500
 - 3s - loss: 0.2753 - acc: 0.8831
Epoch 229/1500
 - 3s - loss: 0.2752 - acc: 0.8828
Epoch 230/1500
 - 2s - loss: 0.2751 - acc: 0.8832
Epoch 231/1500
 - 2s - loss: 0.2743 - acc: 0.8832
Epoch 232/1500
 - 2s - loss: 0.2743 - acc: 0.8835
Epoch 233/1500
 - 2s - loss: 0.2741 - acc: 0.8835
Epoch 234/1500
 - 2s - loss: 0.2742 - acc: 0.8837
Epoch 235/1500
 - 3s - loss: 0.2742 - acc: 0.8835
Epoch 236/1500
 - 2s - loss: 0.2738 - acc: 0.8839
Epoch 237/1500
 - 2s - loss: 0.2740 - acc: 0.8836
Epoch 238/1500
 - 2s - loss: 0.2740 - acc: 0.8838
Epoch 239/1500
 - 2s - loss: 0.2733 - acc: 0.8840
Epoch 240/1500
 - 2s - loss: 0.2744 - acc: 0.8839
Epoch 241/1500
 - 3s - loss: 0.2735 - acc: 0.8837
Epoch 242/1500
 - 2s - loss: 0.2736 - acc: 0.8839
Epoch 243/1500
 - 3s - loss: 0.2730 - acc: 0.8837
Epoch 244/1500
 - 3s - loss: 0.2739 - acc: 0.8837
Epoch 245/1500
 - 2s - loss: 0.2719 - acc: 0.8842
Epoch 246/1500
 - 2s - loss: 0.2713 - acc: 0.8850
Epoch 247/1500
 - 2s - loss: 0.2707 - acc: 0.8852
Epoch 248/1500
 - 3s - loss: 0.2699 - acc: 0.8857
Epoch 249/1500
 - 2s - loss: 0.2688 - acc: 0.8862
Epoch 250/1500
 - 2s - loss: 0.2690 - acc: 0.8857
Epoch 251/1500
 - 2s - loss: 0.2694 - acc: 0.8862
Epoch 252/1500
 - 2s - loss: 0.2675 - acc: 0.8866
Epoch 253/1500
 - 2s - loss: 0.2675 - acc: 0.8871
Epoch 254/1500
 - 3s - loss: 0.2668 - acc: 0.8869
Epoch 255/1500
 - 2s - loss: 0.2669 - acc: 0.8866
Epoch 256/1500
 - 2s - loss: 0.2659 - acc: 0.8875
Epoch 257/1500
 - 2s - loss: 0.2663 - acc: 0.8875
Epoch 258/1500
 - 2s - loss: 0.2660 - acc: 0.8873
Epoch 259/1500
 - 3s - loss: 0.2647 - acc: 0.8875
Epoch 260/1500
 - 3s - loss: 0.2647 - acc: 0.8872
Epoch 261/1500
 - 3s - loss: 0.2642 - acc: 0.8879
Epoch 262/1500
 - 2s - loss: 0.2624 - acc: 0.8888
Epoch 263/1500
 - 2s - loss: 0.2625 - acc: 0.8884
Epoch 264/1500
 - 2s - loss: 0.2624 - acc: 0.8886
Epoch 265/1500
 - 2s - loss: 0.2623 - acc: 0.8884
Epoch 266/1500
 - 2s - loss: 0.2621 - acc: 0.8886
Epoch 267/1500
 - 3s - loss: 0.2611 - acc: 0.8887
Epoch 268/1500
 - 2s - loss: 0.2616 - acc: 0.8884
Epoch 269/1500
 - 2s - loss: 0.2609 - acc: 0.8892
Epoch 270/1500
 - 2s - loss: 0.2604 - acc: 0.8892
Epoch 271/1500
 - 3s - loss: 0.2613 - acc: 0.8886
Epoch 272/1500
 - 3s - loss: 0.2606 - acc: 0.8889
Epoch 273/1500
 - 3s - loss: 0.2598 - acc: 0.8893
Epoch 274/1500
 - 3s - loss: 0.2603 - acc: 0.8891
Epoch 275/1500
 - 3s - loss: 0.2600 - acc: 0.8895
Epoch 276/1500
 - 2s - loss: 0.2598 - acc: 0.8891
Epoch 277/1500
 - 2s - loss: 0.2599 - acc: 0.8892
Epoch 278/1500
 - 2s - loss: 0.2598 - acc: 0.8889
Epoch 279/1500
 - 3s - loss: 0.2589 - acc: 0.8896
Epoch 280/1500
 - 2s - loss: 0.2588 - acc: 0.8898
Epoch 281/1500
 - 2s - loss: 0.2597 - acc: 0.8892
Epoch 282/1500
 - 2s - loss: 0.2590 - acc: 0.8896
Epoch 283/1500
 - 2s - loss: 0.2590 - acc: 0.8890
Epoch 284/1500
 - 2s - loss: 0.2581 - acc: 0.8901
Epoch 285/1500
 - 3s - loss: 0.2590 - acc: 0.8897
Epoch 286/1500
 - 3s - loss: 0.2584 - acc: 0.8899
Epoch 287/1500
 - 2s - loss: 0.2574 - acc: 0.8902
Epoch 288/1500
 - 2s - loss: 0.2579 - acc: 0.8900
Epoch 289/1500
 - 2s - loss: 0.2570 - acc: 0.8904
Epoch 290/1500
 - 2s - loss: 0.2579 - acc: 0.8898
Epoch 291/1500
 - 2s - loss: 0.2569 - acc: 0.8900
Epoch 292/1500
 - 3s - loss: 0.2566 - acc: 0.8908
Epoch 293/1500
 - 2s - loss: 0.2578 - acc: 0.8896
Epoch 294/1500
 - 2s - loss: 0.2578 - acc: 0.8902
Epoch 295/1500
 - 2s - loss: 0.2568 - acc: 0.8905
Epoch 296/1500
 - 2s - loss: 0.2574 - acc: 0.8902
Epoch 297/1500
 - 2s - loss: 0.2563 - acc: 0.8903
Epoch 298/1500
 - 2s - loss: 0.2566 - acc: 0.8904
Epoch 299/1500
 - 3s - loss: 0.2566 - acc: 0.8905
Epoch 300/1500
 - 2s - loss: 0.2567 - acc: 0.8906
Epoch 301/1500
 - 3s - loss: 0.2572 - acc: 0.8900
Epoch 302/1500
 - 2s - loss: 0.2564 - acc: 0.8902
Epoch 303/1500
 - 2s - loss: 0.2568 - acc: 0.8903
Epoch 304/1500
 - 2s - loss: 0.2553 - acc: 0.8912
Epoch 305/1500
 - 3s - loss: 0.2556 - acc: 0.8908
Epoch 306/1500
 - 2s - loss: 0.2560 - acc: 0.8906
Epoch 307/1500
 - 2s - loss: 0.2559 - acc: 0.8907
Epoch 308/1500
 - 2s - loss: 0.2563 - acc: 0.8904
Epoch 309/1500
 - 2s - loss: 0.2558 - acc: 0.8904
Epoch 310/1500
 - 2s - loss: 0.2559 - acc: 0.8904
Epoch 311/1500
 - 3s - loss: 0.2546 - acc: 0.8911
Epoch 312/1500
 - 3s - loss: 0.2558 - acc: 0.8904
Epoch 313/1500
 - 2s - loss: 0.2558 - acc: 0.8905
Epoch 314/1500
 - 2s - loss: 0.2557 - acc: 0.8906
Epoch 315/1500
 - 2s - loss: 0.2543 - acc: 0.8909
Epoch 316/1500
 - 2s - loss: 0.2551 - acc: 0.8909
Epoch 317/1500
 - 2s - loss: 0.2546 - acc: 0.8911
Epoch 318/1500
 - 3s - loss: 0.2558 - acc: 0.8905
Epoch 319/1500
 - 2s - loss: 0.2552 - acc: 0.8908
Epoch 320/1500
 - 2s - loss: 0.2556 - acc: 0.8905
Epoch 321/1500
 - 3s - loss: 0.2547 - acc: 0.8909
Epoch 322/1500
 - 3s - loss: 0.2547 - acc: 0.8909
Epoch 323/1500
 - 3s - loss: 0.2552 - acc: 0.8906
Epoch 324/1500
 - 3s - loss: 0.2546 - acc: 0.8910
Epoch 325/1500
 - 2s - loss: 0.2550 - acc: 0.8910
Epoch 326/1500
 - 2s - loss: 0.2548 - acc: 0.8907
Epoch 327/1500
 - 2s - loss: 0.2558 - acc: 0.8905
Epoch 328/1500
 - 2s - loss: 0.2550 - acc: 0.8909
Epoch 329/1500
 - 2s - loss: 0.2541 - acc: 0.8908
Epoch 330/1500
 - 2s - loss: 0.2540 - acc: 0.8916
Epoch 331/1500
 - 3s - loss: 0.2540 - acc: 0.8914
Epoch 332/1500
 - 2s - loss: 0.2549 - acc: 0.8906
Epoch 333/1500
 - 2s - loss: 0.2544 - acc: 0.8910
Epoch 334/1500
 - 2s - loss: 0.2541 - acc: 0.8910
Epoch 335/1500
 - 2s - loss: 0.2542 - acc: 0.8909
Epoch 336/1500
 - 2s - loss: 0.2544 - acc: 0.8909
Epoch 337/1500
 - 3s - loss: 0.2546 - acc: 0.8909
Epoch 338/1500
 - 2s - loss: 0.2536 - acc: 0.8918
Epoch 339/1500
 - 2s - loss: 0.2544 - acc: 0.8910
Epoch 340/1500
 - 2s - loss: 0.2542 - acc: 0.8911
Epoch 341/1500
 - 2s - loss: 0.2539 - acc: 0.8912
Epoch 342/1500
 - 2s - loss: 0.2542 - acc: 0.8911
Epoch 343/1500
 - 2s - loss: 0.2540 - acc: 0.8914
Epoch 344/1500
 - 3s - loss: 0.2539 - acc: 0.8912
Epoch 345/1500
 - 2s - loss: 0.2540 - acc: 0.8908
Epoch 346/1500
 - 3s - loss: 0.2534 - acc: 0.8912
Epoch 347/1500
 - 3s - loss: 0.2535 - acc: 0.8914
Epoch 348/1500
 - 3s - loss: 0.2529 - acc: 0.8916
Epoch 349/1500
 - 3s - loss: 0.2537 - acc: 0.8911
Epoch 350/1500
 - 3s - loss: 0.2536 - acc: 0.8914
Epoch 351/1500
 - 3s - loss: 0.2537 - acc: 0.8910
Epoch 352/1500
 - 2s - loss: 0.2544 - acc: 0.8913
Epoch 353/1500
 - 3s - loss: 0.2532 - acc: 0.8918
Epoch 354/1500
 - 2s - loss: 0.2540 - acc: 0.8916
Epoch 355/1500
 - 2s - loss: 0.2535 - acc: 0.8913
Epoch 356/1500
 - 3s - loss: 0.2533 - acc: 0.8912
Epoch 357/1500
 - 2s - loss: 0.2524 - acc: 0.8919
Epoch 358/1500
 - 2s - loss: 0.2530 - acc: 0.8917
Epoch 359/1500
 - 2s - loss: 0.2534 - acc: 0.8913
Epoch 360/1500
 - 2s - loss: 0.2535 - acc: 0.8913
Epoch 361/1500
 - 2s - loss: 0.2527 - acc: 0.8917
Epoch 362/1500
 - 3s - loss: 0.2540 - acc: 0.8907
Epoch 363/1500
 - 2s - loss: 0.2534 - acc: 0.8911
Epoch 364/1500
 - 3s - loss: 0.2520 - acc: 0.8918
Epoch 365/1500
 - 2s - loss: 0.2528 - acc: 0.8915
Epoch 366/1500
 - 2s - loss: 0.2526 - acc: 0.8912
Epoch 367/1500
 - 2s - loss: 0.2523 - acc: 0.8918
Epoch 368/1500
 - 3s - loss: 0.2519 - acc: 0.8918
Epoch 369/1500
 - 3s - loss: 0.2516 - acc: 0.8921
Epoch 370/1500
 - 2s - loss: 0.2519 - acc: 0.8918
Epoch 371/1500
 - 2s - loss: 0.2520 - acc: 0.8919
Epoch 372/1500
 - 2s - loss: 0.2509 - acc: 0.8922
Epoch 373/1500
 - 3s - loss: 0.2515 - acc: 0.8920
Epoch 374/1500
 - 2s - loss: 0.2517 - acc: 0.8917
Epoch 375/1500
 - 3s - loss: 0.2512 - acc: 0.8918
Epoch 376/1500
 - 2s - loss: 0.2515 - acc: 0.8918
Epoch 377/1500
 - 2s - loss: 0.2518 - acc: 0.8920
Epoch 378/1500
 - 2s - loss: 0.2513 - acc: 0.8926
Epoch 379/1500
 - 2s - loss: 0.2514 - acc: 0.8922
Epoch 380/1500
 - 2s - loss: 0.2512 - acc: 0.8921
Epoch 381/1500
 - 3s - loss: 0.2507 - acc: 0.8923
Epoch 382/1500
 - 3s - loss: 0.2514 - acc: 0.8923
Epoch 383/1500
 - 2s - loss: 0.2506 - acc: 0.8923
Epoch 384/1500
 - 2s - loss: 0.2506 - acc: 0.8923
Epoch 385/1500
 - 2s - loss: 0.2513 - acc: 0.8924
Epoch 386/1500
 - 2s - loss: 0.2505 - acc: 0.8925
Epoch 387/1500
 - 2s - loss: 0.2502 - acc: 0.8926
Epoch 388/1500
 - 3s - loss: 0.2517 - acc: 0.8919
Epoch 389/1500
 - 2s - loss: 0.2501 - acc: 0.8925
Epoch 390/1500
 - 2s - loss: 0.2498 - acc: 0.8927
Epoch 391/1500
 - 2s - loss: 0.2501 - acc: 0.8926
Epoch 392/1500
 - 2s - loss: 0.2497 - acc: 0.8924
Epoch 393/1500
 - 2s - loss: 0.2496 - acc: 0.8928
Epoch 394/1500
 - 3s - loss: 0.2494 - acc: 0.8929
Epoch 395/1500
 - 3s - loss: 0.2506 - acc: 0.8924
Epoch 396/1500
 - 2s - loss: 0.2496 - acc: 0.8928
Epoch 397/1500
 - 3s - loss: 0.2503 - acc: 0.8925
Epoch 398/1500
 - 2s - loss: 0.2489 - acc: 0.8930
Epoch 399/1500
 - 2s - loss: 0.2488 - acc: 0.8930
Epoch 400/1500
 - 2s - loss: 0.2494 - acc: 0.8925
Epoch 401/1500
 - 3s - loss: 0.2492 - acc: 0.8932
Epoch 402/1500
 - 2s - loss: 0.2494 - acc: 0.8933
Epoch 403/1500
 - 2s - loss: 0.2483 - acc: 0.8931
Epoch 404/1500
 - 2s - loss: 0.2483 - acc: 0.8932
Epoch 405/1500
 - 2s - loss: 0.2487 - acc: 0.8935
Epoch 406/1500
 - 2s - loss: 0.2487 - acc: 0.8931
Epoch 407/1500
 - 3s - loss: 0.2490 - acc: 0.8929
Epoch 408/1500
 - 3s - loss: 0.2488 - acc: 0.8931
Epoch 409/1500
 - 2s - loss: 0.2484 - acc: 0.8931
Epoch 410/1500
 - 2s - loss: 0.2476 - acc: 0.8936
Epoch 411/1500
 - 2s - loss: 0.2482 - acc: 0.8934
Epoch 412/1500
 - 2s - loss: 0.2492 - acc: 0.8931
Epoch 413/1500
 - 2s - loss: 0.2476 - acc: 0.8937
Epoch 414/1500
 - 3s - loss: 0.2481 - acc: 0.8933
Epoch 415/1500
 - 2s - loss: 0.2490 - acc: 0.8928
Epoch 416/1500
 - 3s - loss: 0.2476 - acc: 0.8934
Epoch 417/1500
 - 3s - loss: 0.2477 - acc: 0.8932
Epoch 418/1500
 - 3s - loss: 0.2470 - acc: 0.8938
Epoch 419/1500
 - 3s - loss: 0.2472 - acc: 0.8936
Epoch 420/1500
 - 3s - loss: 0.2477 - acc: 0.8935
Epoch 421/1500
 - 2s - loss: 0.2480 - acc: 0.8935
Epoch 422/1500
 - 3s - loss: 0.2470 - acc: 0.8936
Epoch 423/1500
 - 3s - loss: 0.2474 - acc: 0.8932
Epoch 424/1500
 - 3s - loss: 0.2479 - acc: 0.8935
Epoch 425/1500
 - 3s - loss: 0.2472 - acc: 0.8938
Epoch 426/1500
 - 3s - loss: 0.2468 - acc: 0.8940
Epoch 427/1500
 - 3s - loss: 0.2469 - acc: 0.8942
Epoch 428/1500
 - 2s - loss: 0.2472 - acc: 0.8934
Epoch 429/1500
 - 2s - loss: 0.2466 - acc: 0.8940
Epoch 430/1500
 - 2s - loss: 0.2470 - acc: 0.8934
Epoch 431/1500
 - 2s - loss: 0.2470 - acc: 0.8938
Epoch 432/1500
 - 3s - loss: 0.2471 - acc: 0.8938
Epoch 433/1500
 - 2s - loss: 0.2483 - acc: 0.8933
Epoch 434/1500
 - 2s - loss: 0.2479 - acc: 0.8932
Epoch 435/1500
 - 2s - loss: 0.2475 - acc: 0.8936
Epoch 436/1500
 - 2s - loss: 0.2471 - acc: 0.8940
Epoch 437/1500
 - 2s - loss: 0.2469 - acc: 0.8939
Epoch 438/1500
 - 2s - loss: 0.2462 - acc: 0.8944
Epoch 439/1500
 - 3s - loss: 0.2464 - acc: 0.8940
Epoch 440/1500
 - 2s - loss: 0.2467 - acc: 0.8939
Epoch 441/1500
 - 2s - loss: 0.2475 - acc: 0.8940
Epoch 442/1500
 - 2s - loss: 0.2471 - acc: 0.8936
Epoch 443/1500
 - 2s - loss: 0.2470 - acc: 0.8938
Epoch 444/1500
 - 2s - loss: 0.2469 - acc: 0.8938
Epoch 445/1500
 - 3s - loss: 0.2465 - acc: 0.8941
Epoch 446/1500
 - 3s - loss: 0.2466 - acc: 0.8939
Epoch 447/1500
 - 3s - loss: 0.2466 - acc: 0.8940
Epoch 448/1500
 - 2s - loss: 0.2460 - acc: 0.8945
Epoch 449/1500
 - 2s - loss: 0.2463 - acc: 0.8943
Epoch 450/1500
 - 2s - loss: 0.2462 - acc: 0.8947
Epoch 451/1500
 - 2s - loss: 0.2474 - acc: 0.8937
Epoch 452/1500
 - 3s - loss: 0.2466 - acc: 0.8941
Epoch 453/1500
 - 2s - loss: 0.2474 - acc: 0.8940
Epoch 454/1500
 - 2s - loss: 0.2457 - acc: 0.8944
Epoch 455/1500
 - 3s - loss: 0.2475 - acc: 0.8936
Epoch 456/1500
 - 3s - loss: 0.2457 - acc: 0.8944
Epoch 457/1500
 - 3s - loss: 0.2459 - acc: 0.8945
Epoch 458/1500
 - 3s - loss: 0.2456 - acc: 0.8944
Epoch 459/1500
 - 2s - loss: 0.2471 - acc: 0.8939
Epoch 460/1500
 - 2s - loss: 0.2468 - acc: 0.8942
Epoch 461/1500
 - 2s - loss: 0.2459 - acc: 0.8941
Epoch 462/1500
 - 2s - loss: 0.2460 - acc: 0.8945
Epoch 463/1500
 - 2s - loss: 0.2467 - acc: 0.8942
Epoch 464/1500
 - 3s - loss: 0.2460 - acc: 0.8943
Epoch 465/1500
 - 2s - loss: 0.2464 - acc: 0.8942
Epoch 466/1500
 - 2s - loss: 0.2460 - acc: 0.8944
Epoch 467/1500
 - 2s - loss: 0.2457 - acc: 0.8945
Epoch 468/1500
 - 2s - loss: 0.2457 - acc: 0.8945
Epoch 469/1500
 - 2s - loss: 0.2459 - acc: 0.8947
Epoch 470/1500
 - 2s - loss: 0.2458 - acc: 0.8947
Epoch 471/1500
 - 3s - loss: 0.2462 - acc: 0.8945
Epoch 472/1500
 - 2s - loss: 0.2463 - acc: 0.8945
Epoch 473/1500
 - 2s - loss: 0.2466 - acc: 0.8943
Epoch 474/1500
 - 2s - loss: 0.2449 - acc: 0.8946
Epoch 475/1500
 - 2s - loss: 0.2456 - acc: 0.8943
Epoch 476/1500
 - 2s - loss: 0.2461 - acc: 0.8947
Epoch 477/1500
 - 3s - loss: 0.2464 - acc: 0.8941
Epoch 478/1500
 - 2s - loss: 0.2465 - acc: 0.8944
Epoch 479/1500
 - 2s - loss: 0.2452 - acc: 0.8947
Epoch 480/1500
 - 2s - loss: 0.2453 - acc: 0.8948
Epoch 481/1500
 - 2s - loss: 0.2458 - acc: 0.8944
Epoch 482/1500
 - 2s - loss: 0.2456 - acc: 0.8946
Epoch 483/1500
 - 2s - loss: 0.2453 - acc: 0.8953
Epoch 484/1500
 - 3s - loss: 0.2450 - acc: 0.8950
Epoch 485/1500
 - 2s - loss: 0.2449 - acc: 0.8950
Epoch 486/1500
 - 2s - loss: 0.2460 - acc: 0.8947
Epoch 487/1500
 - 2s - loss: 0.2446 - acc: 0.8950
Epoch 488/1500
 - 2s - loss: 0.2456 - acc: 0.8949
Epoch 489/1500
 - 2s - loss: 0.2448 - acc: 0.8952
Epoch 490/1500
 - 3s - loss: 0.2444 - acc: 0.8952
Epoch 491/1500
 - 2s - loss: 0.2451 - acc: 0.8949
Epoch 492/1500
 - 3s - loss: 0.2456 - acc: 0.8948
Epoch 493/1500
 - 2s - loss: 0.2450 - acc: 0.8951
Epoch 494/1500
 - 2s - loss: 0.2446 - acc: 0.8947
Epoch 495/1500
 - 3s - loss: 0.2451 - acc: 0.8948
Epoch 496/1500
 - 3s - loss: 0.2442 - acc: 0.8954
Epoch 497/1500
 - 2s - loss: 0.2446 - acc: 0.8949
Epoch 498/1500
 - 3s - loss: 0.2452 - acc: 0.8948
Epoch 499/1500
 - 3s - loss: 0.2457 - acc: 0.8947
Epoch 500/1500
 - 3s - loss: 0.2447 - acc: 0.8952
Epoch 501/1500
 - 3s - loss: 0.2447 - acc: 0.8949
Epoch 502/1500
 - 3s - loss: 0.2451 - acc: 0.8949
Epoch 503/1500
 - 2s - loss: 0.2448 - acc: 0.8949
Epoch 504/1500
 - 2s - loss: 0.2448 - acc: 0.8950
Epoch 505/1500
 - 2s - loss: 0.2447 - acc: 0.8952
Epoch 506/1500
 - 2s - loss: 0.2455 - acc: 0.8945
Epoch 507/1500
 - 2s - loss: 0.2446 - acc: 0.8949
Epoch 508/1500
 - 2s - loss: 0.2447 - acc: 0.8947
Epoch 509/1500
 - 3s - loss: 0.2451 - acc: 0.8950
Epoch 510/1500
 - 2s - loss: 0.2442 - acc: 0.8951
Epoch 511/1500
 - 3s - loss: 0.2447 - acc: 0.8949
Epoch 512/1500
 - 3s - loss: 0.2446 - acc: 0.8950
Epoch 513/1500
 - 3s - loss: 0.2452 - acc: 0.8949
Epoch 514/1500
 - 3s - loss: 0.2445 - acc: 0.8950
Epoch 515/1500
 - 3s - loss: 0.2443 - acc: 0.8954
Epoch 516/1500
 - 2s - loss: 0.2451 - acc: 0.8949
Epoch 517/1500
 - 3s - loss: 0.2438 - acc: 0.8954
Epoch 518/1500
 - 2s - loss: 0.2445 - acc: 0.8948
Epoch 519/1500
 - 3s - loss: 0.2446 - acc: 0.8949
Epoch 520/1500
 - 3s - loss: 0.2439 - acc: 0.8953
Epoch 521/1500
 - 3s - loss: 0.2443 - acc: 0.8953
Epoch 522/1500
 - 3s - loss: 0.2442 - acc: 0.8949
Epoch 523/1500
 - 3s - loss: 0.2439 - acc: 0.8957
Epoch 524/1500
 - 3s - loss: 0.2437 - acc: 0.8950
Epoch 525/1500
 - 3s - loss: 0.2442 - acc: 0.8952
Epoch 526/1500
 - 3s - loss: 0.2445 - acc: 0.8954
Epoch 527/1500
 - 3s - loss: 0.2430 - acc: 0.8955
Epoch 528/1500
 - 2s - loss: 0.2438 - acc: 0.8957
Epoch 529/1500
 - 2s - loss: 0.2437 - acc: 0.8954
Epoch 530/1500
 - 3s - loss: 0.2433 - acc: 0.8954
Epoch 531/1500
 - 3s - loss: 0.2442 - acc: 0.8950
Epoch 532/1500
 - 2s - loss: 0.2445 - acc: 0.8949
Epoch 533/1500
 - 2s - loss: 0.2431 - acc: 0.8954
Epoch 534/1500
 - 3s - loss: 0.2440 - acc: 0.8953
Epoch 535/1500
 - 2s - loss: 0.2429 - acc: 0.8954
Epoch 536/1500
 - 2s - loss: 0.2429 - acc: 0.8956
Epoch 537/1500
 - 2s - loss: 0.2438 - acc: 0.8955
Epoch 538/1500
 - 3s - loss: 0.2432 - acc: 0.8957
Epoch 539/1500
 - 3s - loss: 0.2439 - acc: 0.8953
Epoch 540/1500
 - 3s - loss: 0.2443 - acc: 0.8950
Epoch 541/1500
 - 2s - loss: 0.2441 - acc: 0.8951
Epoch 542/1500
 - 2s - loss: 0.2435 - acc: 0.8953
Epoch 543/1500
 - 3s - loss: 0.2444 - acc: 0.8948
Epoch 544/1500
 - 2s - loss: 0.2435 - acc: 0.8951
Epoch 545/1500
 - 2s - loss: 0.2440 - acc: 0.8951
Epoch 546/1500
 - 3s - loss: 0.2434 - acc: 0.8954
Epoch 547/1500
 - 2s - loss: 0.2434 - acc: 0.8952
Epoch 548/1500
 - 2s - loss: 0.2441 - acc: 0.8953
Epoch 549/1500
 - 2s - loss: 0.2432 - acc: 0.8955
Epoch 550/1500
 - 2s - loss: 0.2423 - acc: 0.8961
Epoch 551/1500
 - 2s - loss: 0.2442 - acc: 0.8951
Epoch 552/1500
 - 2s - loss: 0.2434 - acc: 0.8955
Epoch 553/1500
 - 3s - loss: 0.2430 - acc: 0.8955
Epoch 554/1500
 - 2s - loss: 0.2425 - acc: 0.8958
Epoch 555/1500
 - 2s - loss: 0.2429 - acc: 0.8959
Epoch 556/1500
 - 3s - loss: 0.2429 - acc: 0.8958
Epoch 557/1500
 - 3s - loss: 0.2428 - acc: 0.8957
Epoch 558/1500
 - 3s - loss: 0.2435 - acc: 0.8955
Epoch 559/1500
 - 3s - loss: 0.2425 - acc: 0.8955
Epoch 560/1500
 - 2s - loss: 0.2432 - acc: 0.8956
Epoch 561/1500
 - 2s - loss: 0.2439 - acc: 0.8948
Epoch 562/1500
 - 2s - loss: 0.2436 - acc: 0.8954
Epoch 563/1500
 - 2s - loss: 0.2427 - acc: 0.8957
Epoch 564/1500
 - 2s - loss: 0.2430 - acc: 0.8954
Epoch 565/1500
 - 3s - loss: 0.2425 - acc: 0.8955
Epoch 566/1500
 - 3s - loss: 0.2426 - acc: 0.8958
Epoch 567/1500
 - 2s - loss: 0.2423 - acc: 0.8958
Epoch 568/1500
 - 2s - loss: 0.2421 - acc: 0.8962
Epoch 569/1500
 - 3s - loss: 0.2426 - acc: 0.8957
Epoch 570/1500
 - 3s - loss: 0.2438 - acc: 0.8953
Epoch 571/1500
 - 2s - loss: 0.2427 - acc: 0.8958
Epoch 572/1500
 - 3s - loss: 0.2427 - acc: 0.8956
Epoch 573/1500
 - 3s - loss: 0.2422 - acc: 0.8959
Epoch 574/1500
 - 3s - loss: 0.2433 - acc: 0.8954
Epoch 575/1500
 - 3s - loss: 0.2431 - acc: 0.8955
Epoch 576/1500
 - 3s - loss: 0.2427 - acc: 0.8960
Epoch 577/1500
 - 3s - loss: 0.2432 - acc: 0.8953
Epoch 578/1500
 - 3s - loss: 0.2426 - acc: 0.8958
Epoch 579/1500
 - 2s - loss: 0.2434 - acc: 0.8954
Epoch 580/1500
 - 2s - loss: 0.2422 - acc: 0.8962
Epoch 581/1500
 - 2s - loss: 0.2435 - acc: 0.8952
Epoch 582/1500
 - 2s - loss: 0.2425 - acc: 0.8957
Epoch 583/1500
 - 3s - loss: 0.2424 - acc: 0.8957
Epoch 584/1500
 - 3s - loss: 0.2424 - acc: 0.8958
Epoch 585/1500
 - 2s - loss: 0.2425 - acc: 0.8956
Epoch 586/1500
 - 3s - loss: 0.2420 - acc: 0.8959
Epoch 587/1500
 - 2s - loss: 0.2425 - acc: 0.8960
Epoch 588/1500
 - 2s - loss: 0.2410 - acc: 0.8967
Epoch 589/1500
 - 2s - loss: 0.2420 - acc: 0.8961
Epoch 590/1500
 - 2s - loss: 0.2416 - acc: 0.8957
Epoch 591/1500
 - 3s - loss: 0.2429 - acc: 0.8953
Epoch 592/1500
 - 2s - loss: 0.2434 - acc: 0.8956
Epoch 593/1500
 - 2s - loss: 0.2422 - acc: 0.8958
Epoch 594/1500
 - 2s - loss: 0.2431 - acc: 0.8954
Epoch 595/1500
 - 2s - loss: 0.2413 - acc: 0.8963
Epoch 596/1500
 - 2s - loss: 0.2417 - acc: 0.8959
Epoch 597/1500
 - 3s - loss: 0.2418 - acc: 0.8960
Epoch 598/1500
 - 2s - loss: 0.2418 - acc: 0.8962
Epoch 599/1500
 - 2s - loss: 0.2411 - acc: 0.8965
Epoch 600/1500
 - 2s - loss: 0.2425 - acc: 0.8957
Epoch 601/1500
 - 3s - loss: 0.2417 - acc: 0.8963
Epoch 602/1500
 - 3s - loss: 0.2416 - acc: 0.8963
Epoch 603/1500
 - 3s - loss: 0.2410 - acc: 0.8965
Epoch 604/1500
 - 2s - loss: 0.2414 - acc: 0.8966
Epoch 605/1500
 - 3s - loss: 0.2410 - acc: 0.8966
Epoch 606/1500
 - 2s - loss: 0.2405 - acc: 0.8967
Epoch 607/1500
 - 2s - loss: 0.2417 - acc: 0.8960
Epoch 608/1500
 - 2s - loss: 0.2416 - acc: 0.8964
Epoch 609/1500
 - 2s - loss: 0.2403 - acc: 0.8965
Epoch 610/1500
 - 3s - loss: 0.2407 - acc: 0.8965
Epoch 611/1500
 - 2s - loss: 0.2400 - acc: 0.8971
Epoch 612/1500
 - 2s - loss: 0.2404 - acc: 0.8968
Epoch 613/1500
 - 2s - loss: 0.2398 - acc: 0.8970
Epoch 614/1500
 - 2s - loss: 0.2404 - acc: 0.8967
Epoch 615/1500
 - 2s - loss: 0.2399 - acc: 0.8967
Epoch 616/1500
 - 3s - loss: 0.2397 - acc: 0.8971
Epoch 617/1500
 - 2s - loss: 0.2404 - acc: 0.8969
Epoch 618/1500
 - 2s - loss: 0.2396 - acc: 0.8973
Epoch 619/1500
 - 2s - loss: 0.2395 - acc: 0.8971
Epoch 620/1500
 - 2s - loss: 0.2393 - acc: 0.8967
Epoch 621/1500
 - 2s - loss: 0.2399 - acc: 0.8964
Epoch 622/1500
 - 2s - loss: 0.2393 - acc: 0.8969
Epoch 623/1500
 - 3s - loss: 0.2394 - acc: 0.8971
Epoch 624/1500
 - 2s - loss: 0.2393 - acc: 0.8968
Epoch 625/1500
 - 2s - loss: 0.2389 - acc: 0.8968
Epoch 626/1500
 - 2s - loss: 0.2390 - acc: 0.8969
Epoch 627/1500
 - 2s - loss: 0.2397 - acc: 0.8969
Epoch 628/1500
 - 2s - loss: 0.2393 - acc: 0.8966
Epoch 629/1500
 - 3s - loss: 0.2388 - acc: 0.8969
Epoch 630/1500
 - 2s - loss: 0.2397 - acc: 0.8968
Epoch 631/1500
 - 2s - loss: 0.2380 - acc: 0.8974
Epoch 632/1500
 - 2s - loss: 0.2392 - acc: 0.8967
Epoch 633/1500
 - 2s - loss: 0.2391 - acc: 0.8967
Epoch 634/1500
 - 2s - loss: 0.2397 - acc: 0.8965
Epoch 635/1500
 - 2s - loss: 0.2379 - acc: 0.8972
Epoch 636/1500
 - 3s - loss: 0.2394 - acc: 0.8962
Epoch 637/1500
 - 2s - loss: 0.2391 - acc: 0.8968
Epoch 638/1500
 - 2s - loss: 0.2388 - acc: 0.8965
Epoch 639/1500
 - 2s - loss: 0.2389 - acc: 0.8967
Epoch 640/1500
 - 2s - loss: 0.2379 - acc: 0.8970
Epoch 641/1500
 - 2s - loss: 0.2381 - acc: 0.8973
Epoch 642/1500
 - 3s - loss: 0.2381 - acc: 0.8971
Epoch 643/1500
 - 2s - loss: 0.2373 - acc: 0.8972
Epoch 644/1500
 - 2s - loss: 0.2383 - acc: 0.8972
Epoch 645/1500
 - 2s - loss: 0.2403 - acc: 0.8966
Epoch 646/1500
 - 2s - loss: 0.2386 - acc: 0.8969
Epoch 647/1500
 - 2s - loss: 0.2398 - acc: 0.8959
Epoch 648/1500
 - 3s - loss: 0.2388 - acc: 0.8967
Epoch 649/1500
 - 3s - loss: 0.2385 - acc: 0.8972
Epoch 650/1500
 - 3s - loss: 0.2395 - acc: 0.8966
Epoch 651/1500
 - 3s - loss: 0.2378 - acc: 0.8974
Epoch 652/1500
 - 3s - loss: 0.2385 - acc: 0.8967
Epoch 653/1500
 - 3s - loss: 0.2386 - acc: 0.8967
Epoch 654/1500
 - 2s - loss: 0.2385 - acc: 0.8968
Epoch 655/1500
 - 3s - loss: 0.2383 - acc: 0.8968
Epoch 656/1500
 - 2s - loss: 0.2383 - acc: 0.8968
Epoch 657/1500
 - 2s - loss: 0.2381 - acc: 0.8964
Epoch 658/1500
 - 2s - loss: 0.2383 - acc: 0.8969
Epoch 659/1500
 - 3s - loss: 0.2385 - acc: 0.8968
Epoch 660/1500
 - 2s - loss: 0.2378 - acc: 0.8967
Epoch 661/1500
 - 3s - loss: 0.2385 - acc: 0.8971
Epoch 662/1500
 - 2s - loss: 0.2380 - acc: 0.8973
Epoch 663/1500
 - 2s - loss: 0.2380 - acc: 0.8969
Epoch 664/1500
 - 3s - loss: 0.2389 - acc: 0.8966
Epoch 665/1500
 - 2s - loss: 0.2375 - acc: 0.8971
Epoch 666/1500
 - 3s - loss: 0.2378 - acc: 0.8974
Epoch 667/1500
 - 3s - loss: 0.2375 - acc: 0.8975
Epoch 668/1500
 - 2s - loss: 0.2377 - acc: 0.8972
Epoch 669/1500
 - 2s - loss: 0.2379 - acc: 0.8970
Epoch 670/1500
 - 2s - loss: 0.2372 - acc: 0.8972
Epoch 671/1500
 - 2s - loss: 0.2372 - acc: 0.8975
Epoch 672/1500
 - 2s - loss: 0.2374 - acc: 0.8971
Epoch 673/1500
 - 2s - loss: 0.2374 - acc: 0.8969
Epoch 674/1500
 - 3s - loss: 0.2376 - acc: 0.8974
Epoch 675/1500
 - 2s - loss: 0.2371 - acc: 0.8967
Epoch 676/1500
 - 3s - loss: 0.2363 - acc: 0.8976
Epoch 677/1500
 - 2s - loss: 0.2365 - acc: 0.8977
Epoch 678/1500
 - 2s - loss: 0.2369 - acc: 0.8976
Epoch 679/1500
 - 3s - loss: 0.2373 - acc: 0.8968
Epoch 680/1500
 - 3s - loss: 0.2374 - acc: 0.8972
Epoch 681/1500
 - 2s - loss: 0.2363 - acc: 0.8975
Epoch 682/1500
 - 2s - loss: 0.2368 - acc: 0.8974
Epoch 683/1500
 - 2s - loss: 0.2371 - acc: 0.8971
Epoch 684/1500
 - 2s - loss: 0.2373 - acc: 0.8971
Epoch 685/1500
 - 2s - loss: 0.2364 - acc: 0.8974
Epoch 686/1500
 - 3s - loss: 0.2368 - acc: 0.8976
Epoch 687/1500
 - 3s - loss: 0.2371 - acc: 0.8970
Epoch 688/1500
 - 2s - loss: 0.2365 - acc: 0.8972
Epoch 689/1500
 - 2s - loss: 0.2368 - acc: 0.8970
Epoch 690/1500
 - 2s - loss: 0.2364 - acc: 0.8976
Epoch 691/1500
 - 2s - loss: 0.2368 - acc: 0.8972
Epoch 692/1500
 - 2s - loss: 0.2365 - acc: 0.8974
Epoch 693/1500
 - 3s - loss: 0.2363 - acc: 0.8974
Epoch 694/1500
 - 2s - loss: 0.2367 - acc: 0.8972
Epoch 695/1500
 - 2s - loss: 0.2365 - acc: 0.8973
Epoch 696/1500
 - 2s - loss: 0.2368 - acc: 0.8976
Epoch 697/1500
 - 2s - loss: 0.2360 - acc: 0.8976
Epoch 698/1500
 - 3s - loss: 0.2357 - acc: 0.8977
Epoch 699/1500
 - 3s - loss: 0.2366 - acc: 0.8971
Epoch 700/1500
 - 2s - loss: 0.2372 - acc: 0.8974
Epoch 701/1500
 - 2s - loss: 0.2365 - acc: 0.8976
Epoch 702/1500
 - 2s - loss: 0.2364 - acc: 0.8973
Epoch 703/1500
 - 2s - loss: 0.2355 - acc: 0.8975
Epoch 704/1500
 - 3s - loss: 0.2369 - acc: 0.8971
Epoch 705/1500
 - 3s - loss: 0.2360 - acc: 0.8976
Epoch 706/1500
 - 3s - loss: 0.2368 - acc: 0.8975
Epoch 707/1500
 - 2s - loss: 0.2362 - acc: 0.8977
Epoch 708/1500
 - 2s - loss: 0.2362 - acc: 0.8975
Epoch 709/1500
 - 2s - loss: 0.2367 - acc: 0.8970
Epoch 710/1500
 - 2s - loss: 0.2365 - acc: 0.8973
Epoch 711/1500
 - 2s - loss: 0.2375 - acc: 0.8973
Epoch 712/1500
 - 3s - loss: 0.2364 - acc: 0.8973
Epoch 713/1500
 - 2s - loss: 0.2364 - acc: 0.8973
Epoch 714/1500
 - 2s - loss: 0.2359 - acc: 0.8978
Epoch 715/1500
 - 2s - loss: 0.2366 - acc: 0.8971
Epoch 716/1500
 - 2s - loss: 0.2365 - acc: 0.8973
Epoch 717/1500
 - 2s - loss: 0.2366 - acc: 0.8973
Epoch 718/1500
 - 2s - loss: 0.2374 - acc: 0.8968
Epoch 719/1500
 - 3s - loss: 0.2355 - acc: 0.8978
Epoch 720/1500
 - 2s - loss: 0.2364 - acc: 0.8974
Epoch 721/1500
 - 2s - loss: 0.2363 - acc: 0.8972
Epoch 722/1500
 - 3s - loss: 0.2360 - acc: 0.8979
Epoch 723/1500
 - 2s - loss: 0.2354 - acc: 0.8978
Epoch 724/1500
 - 3s - loss: 0.2368 - acc: 0.8973
Epoch 725/1500
 - 3s - loss: 0.2359 - acc: 0.8972
Epoch 726/1500
 - 3s - loss: 0.2356 - acc: 0.8977
Epoch 727/1500
 - 3s - loss: 0.2356 - acc: 0.8976
Epoch 728/1500
 - 3s - loss: 0.2357 - acc: 0.8975
Epoch 729/1500
 - 3s - loss: 0.2361 - acc: 0.8974
Epoch 730/1500
 - 2s - loss: 0.2359 - acc: 0.8975
Epoch 731/1500
 - 3s - loss: 0.2358 - acc: 0.8976
Epoch 732/1500
 - 2s - loss: 0.2353 - acc: 0.8980
Epoch 733/1500
 - 2s - loss: 0.2357 - acc: 0.8975
Epoch 734/1500
 - 3s - loss: 0.2360 - acc: 0.8978
Epoch 735/1500
 - 2s - loss: 0.2365 - acc: 0.8974
Epoch 736/1500
 - 2s - loss: 0.2364 - acc: 0.8970
Epoch 737/1500
 - 3s - loss: 0.2345 - acc: 0.8980
Epoch 738/1500
 - 2s - loss: 0.2354 - acc: 0.8978
Epoch 739/1500
 - 2s - loss: 0.2357 - acc: 0.8977
Epoch 740/1500
 - 2s - loss: 0.2354 - acc: 0.8976
Epoch 741/1500
 - 2s - loss: 0.2353 - acc: 0.8977
Epoch 742/1500
 - 2s - loss: 0.2360 - acc: 0.8975
Epoch 743/1500
 - 2s - loss: 0.2354 - acc: 0.8975
Epoch 744/1500
 - 3s - loss: 0.2356 - acc: 0.8976
Epoch 745/1500
 - 3s - loss: 0.2354 - acc: 0.8978
Epoch 746/1500
 - 3s - loss: 0.2357 - acc: 0.8976
Epoch 747/1500
 - 2s - loss: 0.2347 - acc: 0.8980
Epoch 748/1500
 - 2s - loss: 0.2354 - acc: 0.8979
Epoch 749/1500
 - 2s - loss: 0.2356 - acc: 0.8978
Epoch 750/1500
 - 3s - loss: 0.2351 - acc: 0.8979
Epoch 751/1500
 - 2s - loss: 0.2350 - acc: 0.8980
Epoch 752/1500
 - 2s - loss: 0.2353 - acc: 0.8978
Epoch 753/1500
 - 2s - loss: 0.2354 - acc: 0.8976
Epoch 754/1500
 - 2s - loss: 0.2349 - acc: 0.8979
Epoch 755/1500
 - 2s - loss: 0.2355 - acc: 0.8977
Epoch 756/1500
 - 3s - loss: 0.2360 - acc: 0.8976
Epoch 757/1500
 - 3s - loss: 0.2350 - acc: 0.8981
Epoch 758/1500
 - 2s - loss: 0.2366 - acc: 0.8975
Epoch 759/1500
 - 2s - loss: 0.2364 - acc: 0.8974
Epoch 760/1500
 - 2s - loss: 0.2359 - acc: 0.8977
Epoch 761/1500
 - 2s - loss: 0.2366 - acc: 0.8973
Epoch 762/1500
 - 2s - loss: 0.2354 - acc: 0.8981
Epoch 763/1500
 - 3s - loss: 0.2354 - acc: 0.8977
Epoch 764/1500
 - 2s - loss: 0.2354 - acc: 0.8974
Epoch 765/1500
 - 2s - loss: 0.2357 - acc: 0.8979
Epoch 766/1500
 - 2s - loss: 0.2344 - acc: 0.8982
Epoch 767/1500
 - 2s - loss: 0.2359 - acc: 0.8974
Epoch 768/1500
 - 2s - loss: 0.2350 - acc: 0.8977
Epoch 769/1500
 - 3s - loss: 0.2350 - acc: 0.8980
Epoch 770/1500
 - 2s - loss: 0.2355 - acc: 0.8976
Epoch 771/1500
 - 2s - loss: 0.2347 - acc: 0.8975
Epoch 772/1500
 - 2s - loss: 0.2358 - acc: 0.8977
Epoch 773/1500
 - 2s - loss: 0.2348 - acc: 0.8978
Epoch 774/1500
 - 2s - loss: 0.2346 - acc: 0.8979
Epoch 775/1500
 - 2s - loss: 0.2354 - acc: 0.8979
Epoch 776/1500
 - 3s - loss: 0.2353 - acc: 0.8980
Epoch 777/1500
 - 2s - loss: 0.2362 - acc: 0.8981
Epoch 778/1500
 - 2s - loss: 0.2349 - acc: 0.8980
Epoch 779/1500
 - 2s - loss: 0.2358 - acc: 0.8978
Epoch 780/1500
 - 2s - loss: 0.2358 - acc: 0.8976
Epoch 781/1500
 - 2s - loss: 0.2361 - acc: 0.8972
Epoch 782/1500
 - 3s - loss: 0.2361 - acc: 0.8975
Epoch 783/1500
 - 2s - loss: 0.2359 - acc: 0.8975
Epoch 784/1500
 - 2s - loss: 0.2354 - acc: 0.8978
Epoch 785/1500
 - 2s - loss: 0.2352 - acc: 0.8979
Epoch 786/1500
 - 2s - loss: 0.2355 - acc: 0.8980
Epoch 787/1500
 - 2s - loss: 0.2356 - acc: 0.8979
Epoch 788/1500
 - 3s - loss: 0.2356 - acc: 0.8974
Epoch 789/1500
 - 3s - loss: 0.2348 - acc: 0.8982
Epoch 790/1500
 - 2s - loss: 0.2355 - acc: 0.8977
Epoch 791/1500
 - 2s - loss: 0.2353 - acc: 0.8976
Epoch 792/1500
 - 2s - loss: 0.2346 - acc: 0.8983
Epoch 793/1500
 - 2s - loss: 0.2348 - acc: 0.8979
Epoch 794/1500
 - 2s - loss: 0.2355 - acc: 0.8976
Epoch 795/1500
 - 3s - loss: 0.2355 - acc: 0.8980
Epoch 796/1500
 - 2s - loss: 0.2349 - acc: 0.8981
Epoch 797/1500
 - 2s - loss: 0.2356 - acc: 0.8977
Epoch 798/1500
 - 2s - loss: 0.2343 - acc: 0.8979
Epoch 799/1500
 - 3s - loss: 0.2350 - acc: 0.8980
Epoch 800/1500
 - 3s - loss: 0.2352 - acc: 0.8980
Epoch 801/1500
 - 3s - loss: 0.2359 - acc: 0.8977
Epoch 802/1500
 - 3s - loss: 0.2344 - acc: 0.8980
Epoch 803/1500
 - 3s - loss: 0.2348 - acc: 0.8983
Epoch 804/1500
 - 3s - loss: 0.2350 - acc: 0.8981
Epoch 805/1500
 - 2s - loss: 0.2343 - acc: 0.8982
Epoch 806/1500
 - 2s - loss: 0.2344 - acc: 0.8982
Epoch 807/1500
 - 2s - loss: 0.2348 - acc: 0.8981
Epoch 808/1500
 - 3s - loss: 0.2353 - acc: 0.8979
Epoch 809/1500
 - 2s - loss: 0.2339 - acc: 0.8986
Epoch 810/1500
 - 2s - loss: 0.2342 - acc: 0.8984
Epoch 811/1500
 - 2s - loss: 0.2345 - acc: 0.8979
Epoch 812/1500
 - 2s - loss: 0.2356 - acc: 0.8980
Epoch 813/1500
 - 2s - loss: 0.2348 - acc: 0.8982
Epoch 814/1500
 - 3s - loss: 0.2352 - acc: 0.8982
Epoch 815/1500
 - 2s - loss: 0.2348 - acc: 0.8983
Epoch 816/1500
 - 3s - loss: 0.2345 - acc: 0.8981
Epoch 817/1500
 - 2s - loss: 0.2343 - acc: 0.8987
Epoch 818/1500
 - 2s - loss: 0.2346 - acc: 0.8979
Epoch 819/1500
 - 2s - loss: 0.2338 - acc: 0.8986
Epoch 820/1500
 - 3s - loss: 0.2341 - acc: 0.8982
Epoch 821/1500
 - 2s - loss: 0.2345 - acc: 0.8983
Epoch 822/1500
 - 2s - loss: 0.2354 - acc: 0.8977
Epoch 823/1500
 - 2s - loss: 0.2344 - acc: 0.8985
Epoch 824/1500
 - 2s - loss: 0.2340 - acc: 0.8983
Epoch 825/1500
 - 2s - loss: 0.2337 - acc: 0.8984
Epoch 826/1500
 - 2s - loss: 0.2340 - acc: 0.8983
Epoch 827/1500
 - 3s - loss: 0.2344 - acc: 0.8981
Epoch 828/1500
 - 2s - loss: 0.2340 - acc: 0.8982
Epoch 829/1500
 - 2s - loss: 0.2347 - acc: 0.8982
Epoch 830/1500
 - 3s - loss: 0.2335 - acc: 0.8984
Epoch 831/1500
 - 2s - loss: 0.2350 - acc: 0.8978
Epoch 832/1500
 - 2s - loss: 0.2344 - acc: 0.8981
Epoch 833/1500
 - 3s - loss: 0.2346 - acc: 0.8984
Epoch 834/1500
 - 2s - loss: 0.2342 - acc: 0.8984
Epoch 835/1500
 - 2s - loss: 0.2345 - acc: 0.8985
Epoch 836/1500
 - 2s - loss: 0.2351 - acc: 0.8980
Epoch 837/1500
 - 3s - loss: 0.2342 - acc: 0.8981
Epoch 838/1500
 - 2s - loss: 0.2345 - acc: 0.8981
Epoch 839/1500
 - 2s - loss: 0.2347 - acc: 0.8982
Epoch 840/1500
 - 3s - loss: 0.2352 - acc: 0.8979
Epoch 841/1500
 - 2s - loss: 0.2340 - acc: 0.8983
Epoch 842/1500
 - 3s - loss: 0.2352 - acc: 0.8981
Epoch 843/1500
 - 2s - loss: 0.2341 - acc: 0.8982
Epoch 844/1500
 - 2s - loss: 0.2343 - acc: 0.8983
Epoch 845/1500
 - 2s - loss: 0.2338 - acc: 0.8980
Epoch 846/1500
 - 3s - loss: 0.2350 - acc: 0.8981
Epoch 847/1500
 - 2s - loss: 0.2342 - acc: 0.8983
Epoch 848/1500
 - 2s - loss: 0.2338 - acc: 0.8986
Epoch 849/1500
 - 2s - loss: 0.2338 - acc: 0.8983
Epoch 850/1500
 - 2s - loss: 0.2335 - acc: 0.8987
Epoch 851/1500
 - 2s - loss: 0.2338 - acc: 0.8983
Epoch 852/1500
 - 2s - loss: 0.2342 - acc: 0.8982
Epoch 853/1500
 - 3s - loss: 0.2334 - acc: 0.8984
Epoch 854/1500
 - 2s - loss: 0.2339 - acc: 0.8985
Epoch 855/1500
 - 2s - loss: 0.2337 - acc: 0.8985
Epoch 856/1500
 - 2s - loss: 0.2344 - acc: 0.8982
Epoch 857/1500
 - 2s - loss: 0.2348 - acc: 0.8983
Epoch 858/1500
 - 3s - loss: 0.2338 - acc: 0.8986
Epoch 859/1500
 - 3s - loss: 0.2336 - acc: 0.8983
Epoch 860/1500
 - 2s - loss: 0.2337 - acc: 0.8987
Epoch 861/1500
 - 2s - loss: 0.2355 - acc: 0.8981
Epoch 862/1500
 - 2s - loss: 0.2343 - acc: 0.8983
Epoch 863/1500
 - 2s - loss: 0.2339 - acc: 0.8983
Epoch 864/1500
 - 2s - loss: 0.2340 - acc: 0.8988
Epoch 865/1500
 - 3s - loss: 0.2341 - acc: 0.8982
Epoch 866/1500
 - 2s - loss: 0.2343 - acc: 0.8983
Epoch 867/1500
 - 2s - loss: 0.2339 - acc: 0.8984
Epoch 868/1500
 - 3s - loss: 0.2345 - acc: 0.8983
Epoch 869/1500
 - 2s - loss: 0.2339 - acc: 0.8983
Epoch 870/1500
 - 2s - loss: 0.2334 - acc: 0.8985
Epoch 871/1500
 - 3s - loss: 0.2333 - acc: 0.8984
Epoch 872/1500
 - 3s - loss: 0.2337 - acc: 0.8984
Epoch 873/1500
 - 2s - loss: 0.2336 - acc: 0.8987
Epoch 874/1500
 - 2s - loss: 0.2342 - acc: 0.8981
Epoch 875/1500
 - 2s - loss: 0.2339 - acc: 0.8981
Epoch 876/1500
 - 3s - loss: 0.2332 - acc: 0.8988
Epoch 877/1500
 - 3s - loss: 0.2342 - acc: 0.8982
Epoch 878/1500
 - 3s - loss: 0.2341 - acc: 0.8986
Epoch 879/1500
 - 3s - loss: 0.2342 - acc: 0.8986
Epoch 880/1500
 - 3s - loss: 0.2333 - acc: 0.8988
Epoch 881/1500
 - 2s - loss: 0.2332 - acc: 0.8987
Epoch 882/1500
 - 3s - loss: 0.2339 - acc: 0.8989
Epoch 883/1500
 - 2s - loss: 0.2342 - acc: 0.8983
Epoch 884/1500
 - 3s - loss: 0.2335 - acc: 0.8985
Epoch 885/1500
 - 2s - loss: 0.2337 - acc: 0.8985
Epoch 886/1500
 - 2s - loss: 0.2344 - acc: 0.8985
Epoch 887/1500
 - 2s - loss: 0.2344 - acc: 0.8982
Epoch 888/1500
 - 2s - loss: 0.2336 - acc: 0.8985
Epoch 889/1500
 - 2s - loss: 0.2334 - acc: 0.8986
Epoch 890/1500
 - 3s - loss: 0.2331 - acc: 0.8989
Epoch 891/1500
 - 3s - loss: 0.2340 - acc: 0.8984
Epoch 892/1500
 - 2s - loss: 0.2343 - acc: 0.8982
Epoch 893/1500
 - 2s - loss: 0.2336 - acc: 0.8984
Epoch 894/1500
 - 3s - loss: 0.2333 - acc: 0.8986
Epoch 895/1500
 - 2s - loss: 0.2335 - acc: 0.8984
Epoch 896/1500
 - 2s - loss: 0.2328 - acc: 0.8987
Epoch 897/1500
 - 3s - loss: 0.2340 - acc: 0.8985
Epoch 898/1500
 - 2s - loss: 0.2336 - acc: 0.8984
Epoch 899/1500
 - 2s - loss: 0.2341 - acc: 0.8983
Epoch 900/1500
 - 2s - loss: 0.2338 - acc: 0.8985
Epoch 901/1500
 - 2s - loss: 0.2330 - acc: 0.8989
Epoch 902/1500
 - 2s - loss: 0.2334 - acc: 0.8987
Epoch 903/1500
 - 3s - loss: 0.2334 - acc: 0.8987
Epoch 904/1500
 - 3s - loss: 0.2335 - acc: 0.8988
Epoch 905/1500
 - 2s - loss: 0.2342 - acc: 0.8986
Epoch 906/1500
 - 2s - loss: 0.2329 - acc: 0.8992
Epoch 907/1500
 - 2s - loss: 0.2346 - acc: 0.8981
Epoch 908/1500
 - 2s - loss: 0.2338 - acc: 0.8985
Epoch 909/1500
 - 3s - loss: 0.2332 - acc: 0.8985
Epoch 910/1500
 - 3s - loss: 0.2350 - acc: 0.8984
Epoch 911/1500
 - 2s - loss: 0.2332 - acc: 0.8985
Epoch 912/1500
 - 2s - loss: 0.2331 - acc: 0.8986
Epoch 913/1500
 - 2s - loss: 0.2328 - acc: 0.8987
Epoch 914/1500
 - 2s - loss: 0.2333 - acc: 0.8983
Epoch 915/1500
 - 2s - loss: 0.2329 - acc: 0.8991
Epoch 916/1500
 - 3s - loss: 0.2333 - acc: 0.8987
Epoch 917/1500
 - 2s - loss: 0.2333 - acc: 0.8987
Epoch 918/1500
 - 2s - loss: 0.2337 - acc: 0.8984
Epoch 919/1500
 - 2s - loss: 0.2324 - acc: 0.8993
Epoch 920/1500
 - 2s - loss: 0.2337 - acc: 0.8985
Epoch 921/1500
 - 2s - loss: 0.2334 - acc: 0.8985
Epoch 922/1500
 - 2s - loss: 0.2327 - acc: 0.8987
Epoch 923/1500
 - 3s - loss: 0.2330 - acc: 0.8984
Epoch 924/1500
 - 2s - loss: 0.2336 - acc: 0.8985
Epoch 925/1500
 - 2s - loss: 0.2334 - acc: 0.8989
Epoch 926/1500
 - 2s - loss: 0.2329 - acc: 0.8989
Epoch 927/1500
 - 2s - loss: 0.2327 - acc: 0.8990
Epoch 928/1500
 - 2s - loss: 0.2329 - acc: 0.8987
Epoch 929/1500
 - 3s - loss: 0.2335 - acc: 0.8984
Epoch 930/1500
 - 2s - loss: 0.2330 - acc: 0.8988
Epoch 931/1500
 - 3s - loss: 0.2330 - acc: 0.8986
Epoch 932/1500
 - 2s - loss: 0.2330 - acc: 0.8992
Epoch 933/1500
 - 3s - loss: 0.2338 - acc: 0.8987
Epoch 934/1500
 - 3s - loss: 0.2325 - acc: 0.8987
Epoch 935/1500
 - 3s - loss: 0.2330 - acc: 0.8989
Epoch 936/1500
 - 2s - loss: 0.2328 - acc: 0.8989
Epoch 937/1500
 - 2s - loss: 0.2329 - acc: 0.8987
Epoch 938/1500
 - 2s - loss: 0.2326 - acc: 0.8989
Epoch 939/1500
 - 2s - loss: 0.2334 - acc: 0.8985
Epoch 940/1500
 - 2s - loss: 0.2321 - acc: 0.8991
Epoch 941/1500
 - 2s - loss: 0.2329 - acc: 0.8984
Epoch 942/1500
 - 3s - loss: 0.2325 - acc: 0.8990
Epoch 943/1500
 - 2s - loss: 0.2333 - acc: 0.8986
Epoch 944/1500
 - 2s - loss: 0.2331 - acc: 0.8988
Epoch 945/1500
 - 2s - loss: 0.2327 - acc: 0.8988
Epoch 946/1500
 - 2s - loss: 0.2325 - acc: 0.8987
Epoch 947/1500
 - 2s - loss: 0.2327 - acc: 0.8989
Epoch 948/1500
 - 3s - loss: 0.2329 - acc: 0.8984
Epoch 949/1500
 - 2s - loss: 0.2320 - acc: 0.8991
Epoch 950/1500
 - 2s - loss: 0.2329 - acc: 0.8989
Epoch 951/1500
 - 2s - loss: 0.2327 - acc: 0.8992
Epoch 952/1500
 - 3s - loss: 0.2326 - acc: 0.8989
Epoch 953/1500
 - 3s - loss: 0.2323 - acc: 0.8990
Epoch 954/1500
 - 3s - loss: 0.2330 - acc: 0.8988
Epoch 955/1500
 - 3s - loss: 0.2326 - acc: 0.8988
Epoch 956/1500
 - 3s - loss: 0.2320 - acc: 0.8994
Epoch 957/1500
 - 2s - loss: 0.2333 - acc: 0.8988
Epoch 958/1500
 - 2s - loss: 0.2322 - acc: 0.8989
Epoch 959/1500
 - 2s - loss: 0.2322 - acc: 0.8989
Epoch 960/1500
 - 3s - loss: 0.2330 - acc: 0.8990
Epoch 961/1500
 - 3s - loss: 0.2321 - acc: 0.8991
Epoch 962/1500
 - 2s - loss: 0.2328 - acc: 0.8987
Epoch 963/1500
 - 2s - loss: 0.2324 - acc: 0.8992
Epoch 964/1500
 - 2s - loss: 0.2329 - acc: 0.8988
Epoch 965/1500
 - 2s - loss: 0.2331 - acc: 0.8986
Epoch 966/1500
 - 2s - loss: 0.2330 - acc: 0.8989
Epoch 967/1500
 - 3s - loss: 0.2324 - acc: 0.8990
Epoch 968/1500
 - 3s - loss: 0.2323 - acc: 0.8994
Epoch 969/1500
 - 3s - loss: 0.2320 - acc: 0.8990
Epoch 970/1500
 - 3s - loss: 0.2323 - acc: 0.8988
Epoch 971/1500
 - 2s - loss: 0.2322 - acc: 0.8994
Epoch 972/1500
 - 2s - loss: 0.2325 - acc: 0.8992
Epoch 973/1500
 - 3s - loss: 0.2315 - acc: 0.8994
Epoch 974/1500
 - 3s - loss: 0.2326 - acc: 0.8993
Epoch 975/1500
 - 3s - loss: 0.2320 - acc: 0.8991
Epoch 976/1500
 - 3s - loss: 0.2325 - acc: 0.8990
Epoch 977/1500
 - 3s - loss: 0.2332 - acc: 0.8988
Epoch 978/1500
 - 3s - loss: 0.2325 - acc: 0.8990
Epoch 979/1500
 - 3s - loss: 0.2322 - acc: 0.8995
Epoch 980/1500
 - 3s - loss: 0.2320 - acc: 0.8991
Epoch 981/1500
 - 3s - loss: 0.2327 - acc: 0.8989
Epoch 982/1500
 - 2s - loss: 0.2324 - acc: 0.8991
Epoch 983/1500
 - 2s - loss: 0.2325 - acc: 0.8990
Epoch 984/1500
 - 2s - loss: 0.2329 - acc: 0.8991
Epoch 985/1500
 - 3s - loss: 0.2332 - acc: 0.8986
Epoch 986/1500
 - 2s - loss: 0.2320 - acc: 0.8994
Epoch 987/1500
 - 2s - loss: 0.2315 - acc: 0.8993
Epoch 988/1500
 - 2s - loss: 0.2325 - acc: 0.8990
Epoch 989/1500
 - 2s - loss: 0.2322 - acc: 0.8991
Epoch 990/1500
 - 2s - loss: 0.2318 - acc: 0.8990
Epoch 991/1500
 - 2s - loss: 0.2318 - acc: 0.8994
Epoch 992/1500
 - 3s - loss: 0.2315 - acc: 0.8997
Epoch 993/1500
 - 2s - loss: 0.2319 - acc: 0.8994
Epoch 994/1500
 - 2s - loss: 0.2310 - acc: 0.8996
Epoch 995/1500
 - 2s - loss: 0.2315 - acc: 0.8994
Epoch 996/1500
 - 2s - loss: 0.2319 - acc: 0.8993
Epoch 997/1500
 - 2s - loss: 0.2323 - acc: 0.8989
Epoch 998/1500
 - 3s - loss: 0.2318 - acc: 0.8992
Epoch 999/1500
 - 2s - loss: 0.2313 - acc: 0.8992
Epoch 1000/1500
 - 2s - loss: 0.2321 - acc: 0.8988
Epoch 1001/1500
 - 2s - loss: 0.2328 - acc: 0.8990
Epoch 1002/1500
 - 2s - loss: 0.2323 - acc: 0.8992
Epoch 1003/1500
 - 2s - loss: 0.2319 - acc: 0.8992
Epoch 1004/1500
 - 2s - loss: 0.2322 - acc: 0.8991
Epoch 1005/1500
 - 3s - loss: 0.2319 - acc: 0.8991
Epoch 1006/1500
 - 2s - loss: 0.2320 - acc: 0.8991
Epoch 1007/1500
 - 2s - loss: 0.2320 - acc: 0.8991
Epoch 1008/1500
 - 2s - loss: 0.2318 - acc: 0.8991
Epoch 1009/1500
 - 2s - loss: 0.2314 - acc: 0.8994
Epoch 1010/1500
 - 2s - loss: 0.2313 - acc: 0.8993
Epoch 1011/1500
 - 3s - loss: 0.2316 - acc: 0.8992
Epoch 1012/1500
 - 2s - loss: 0.2317 - acc: 0.8992
Epoch 1013/1500
 - 2s - loss: 0.2311 - acc: 0.8994
Epoch 1014/1500
 - 2s - loss: 0.2316 - acc: 0.8992
Epoch 1015/1500
 - 2s - loss: 0.2316 - acc: 0.8989
Epoch 1016/1500
 - 2s - loss: 0.2318 - acc: 0.8991
Epoch 1017/1500
 - 2s - loss: 0.2326 - acc: 0.8989
Epoch 1018/1500
 - 3s - loss: 0.2314 - acc: 0.8993
Epoch 1019/1500
 - 2s - loss: 0.2311 - acc: 0.8994
Epoch 1020/1500
 - 2s - loss: 0.2321 - acc: 0.8993
Epoch 1021/1500
 - 2s - loss: 0.2322 - acc: 0.8993
Epoch 1022/1500
 - 2s - loss: 0.2326 - acc: 0.8995
Epoch 1023/1500
 - 2s - loss: 0.2314 - acc: 0.8991
Epoch 1024/1500
 - 3s - loss: 0.2319 - acc: 0.8992
Epoch 1025/1500
 - 2s - loss: 0.2309 - acc: 0.8997
Epoch 1026/1500
 - 2s - loss: 0.2305 - acc: 0.8998
Epoch 1027/1500
 - 3s - loss: 0.2331 - acc: 0.8990
Epoch 1028/1500
 - 3s - loss: 0.2316 - acc: 0.8995
Epoch 1029/1500
 - 3s - loss: 0.2315 - acc: 0.8995
Epoch 1030/1500
 - 3s - loss: 0.2318 - acc: 0.8991
Epoch 1031/1500
 - 3s - loss: 0.2315 - acc: 0.8990
Epoch 1032/1500
 - 2s - loss: 0.2307 - acc: 0.8997
Epoch 1033/1500
 - 2s - loss: 0.2318 - acc: 0.8995
Epoch 1034/1500
 - 2s - loss: 0.2311 - acc: 0.8993
Epoch 1035/1500
 - 2s - loss: 0.2322 - acc: 0.8991
Epoch 1036/1500
 - 3s - loss: 0.2309 - acc: 0.8998
Epoch 1037/1500
 - 3s - loss: 0.2316 - acc: 0.8996
Epoch 1038/1500
 - 2s - loss: 0.2317 - acc: 0.8995
Epoch 1039/1500
 - 2s - loss: 0.2317 - acc: 0.8999
Epoch 1040/1500
 - 3s - loss: 0.2317 - acc: 0.8993
Epoch 1041/1500
 - 2s - loss: 0.2313 - acc: 0.8996
Epoch 1042/1500
 - 2s - loss: 0.2309 - acc: 0.8998
Epoch 1043/1500
 - 3s - loss: 0.2316 - acc: 0.8996
Epoch 1044/1500
 - 2s - loss: 0.2325 - acc: 0.8988
Epoch 1045/1500
 - 2s - loss: 0.2305 - acc: 0.9000
Epoch 1046/1500
 - 2s - loss: 0.2319 - acc: 0.8996
Epoch 1047/1500
 - 2s - loss: 0.2324 - acc: 0.8993
Epoch 1048/1500
 - 3s - loss: 0.2317 - acc: 0.8995
Epoch 1049/1500
 - 3s - loss: 0.2314 - acc: 0.8994
Epoch 1050/1500
 - 2s - loss: 0.2311 - acc: 0.8994
Epoch 1051/1500
 - 2s - loss: 0.2312 - acc: 0.8995
Epoch 1052/1500
 - 3s - loss: 0.2310 - acc: 0.8995
Epoch 1053/1500
 - 3s - loss: 0.2315 - acc: 0.8994
Epoch 1054/1500
 - 2s - loss: 0.2322 - acc: 0.8990
Epoch 1055/1500
 - 2s - loss: 0.2306 - acc: 0.9000
Epoch 1056/1500
 - 3s - loss: 0.2313 - acc: 0.8995
Epoch 1057/1500
 - 2s - loss: 0.2305 - acc: 0.8993
Epoch 1058/1500
 - 2s - loss: 0.2310 - acc: 0.8995
Epoch 1059/1500
 - 2s - loss: 0.2326 - acc: 0.8992
Epoch 1060/1500
 - 2s - loss: 0.2318 - acc: 0.8995
Epoch 1061/1500
 - 2s - loss: 0.2306 - acc: 0.8994
Epoch 1062/1500
 - 3s - loss: 0.2314 - acc: 0.8993
Epoch 1063/1500
 - 2s - loss: 0.2308 - acc: 0.8998
Epoch 1064/1500
 - 2s - loss: 0.2316 - acc: 0.8995
Epoch 1065/1500
 - 3s - loss: 0.2306 - acc: 0.9000
Epoch 1066/1500
 - 3s - loss: 0.2303 - acc: 0.8998
Epoch 1067/1500
 - 2s - loss: 0.2310 - acc: 0.8994
Epoch 1068/1500
 - 3s - loss: 0.2306 - acc: 0.8998
Epoch 1069/1500
 - 3s - loss: 0.2306 - acc: 0.8997
Epoch 1070/1500
 - 2s - loss: 0.2313 - acc: 0.8991
Epoch 1071/1500
 - 2s - loss: 0.2303 - acc: 0.8999
Epoch 1072/1500
 - 2s - loss: 0.2305 - acc: 0.8995
Epoch 1073/1500
 - 3s - loss: 0.2305 - acc: 0.8996
Epoch 1074/1500
 - 3s - loss: 0.2320 - acc: 0.8994
Epoch 1075/1500
 - 3s - loss: 0.2305 - acc: 0.8996
Epoch 1076/1500
 - 2s - loss: 0.2316 - acc: 0.8992
Epoch 1077/1500
 - 2s - loss: 0.2311 - acc: 0.8996
Epoch 1078/1500
 - 3s - loss: 0.2309 - acc: 0.8996
Epoch 1079/1500
 - 3s - loss: 0.2309 - acc: 0.8997
Epoch 1080/1500
 - 3s - loss: 0.2303 - acc: 0.8998
Epoch 1081/1500
 - 3s - loss: 0.2307 - acc: 0.8999
Epoch 1082/1500
 - 2s - loss: 0.2305 - acc: 0.8995
Epoch 1083/1500
 - 3s - loss: 0.2318 - acc: 0.8995
Epoch 1084/1500
 - 3s - loss: 0.2302 - acc: 0.9000
Epoch 1085/1500
 - 2s - loss: 0.2303 - acc: 0.8999
Epoch 1086/1500
 - 3s - loss: 0.2312 - acc: 0.8998
Epoch 1087/1500
 - 3s - loss: 0.2310 - acc: 0.8995
Epoch 1088/1500
 - 2s - loss: 0.2300 - acc: 0.9001
Epoch 1089/1500
 - 2s - loss: 0.2316 - acc: 0.8993
Epoch 1090/1500
 - 2s - loss: 0.2314 - acc: 0.8995
Epoch 1091/1500
 - 2s - loss: 0.2308 - acc: 0.8998
Epoch 1092/1500
 - 2s - loss: 0.2311 - acc: 0.8997
Epoch 1093/1500
 - 3s - loss: 0.2303 - acc: 0.8999
Epoch 1094/1500
 - 3s - loss: 0.2305 - acc: 0.8997
Epoch 1095/1500
 - 2s - loss: 0.2308 - acc: 0.9001
Epoch 1096/1500
 - 2s - loss: 0.2303 - acc: 0.9002
Epoch 1097/1500
 - 2s - loss: 0.2306 - acc: 0.8999
Epoch 1098/1500
 - 2s - loss: 0.2298 - acc: 0.8999
Epoch 1099/1500
 - 2s - loss: 0.2306 - acc: 0.8998
Epoch 1100/1500
 - 3s - loss: 0.2299 - acc: 0.8999
Epoch 1101/1500
 - 2s - loss: 0.2305 - acc: 0.8995
Epoch 1102/1500
 - 3s - loss: 0.2311 - acc: 0.8997
Epoch 1103/1500
 - 3s - loss: 0.2301 - acc: 0.8998
Epoch 1104/1500
 - 3s - loss: 0.2311 - acc: 0.8993
Epoch 1105/1500
 - 3s - loss: 0.2308 - acc: 0.8997
Epoch 1106/1500
 - 3s - loss: 0.2305 - acc: 0.8994
Epoch 1107/1500
 - 3s - loss: 0.2305 - acc: 0.8997
Epoch 1108/1500
 - 3s - loss: 0.2297 - acc: 0.8997
Epoch 1109/1500
 - 2s - loss: 0.2311 - acc: 0.8992
Epoch 1110/1500
 - 2s - loss: 0.2303 - acc: 0.8996
Epoch 1111/1500
 - 3s - loss: 0.2307 - acc: 0.8995
Epoch 1112/1500
 - 3s - loss: 0.2301 - acc: 0.8996
Epoch 1113/1500
 - 3s - loss: 0.2304 - acc: 0.8995
Epoch 1114/1500
 - 2s - loss: 0.2312 - acc: 0.8999
Epoch 1115/1500
 - 2s - loss: 0.2300 - acc: 0.9000
Epoch 1116/1500
 - 2s - loss: 0.2313 - acc: 0.8994
Epoch 1117/1500
 - 2s - loss: 0.2300 - acc: 0.9002
Epoch 1118/1500
 - 2s - loss: 0.2304 - acc: 0.8996
Epoch 1119/1500
 - 3s - loss: 0.2316 - acc: 0.8995
Epoch 1120/1500
 - 2s - loss: 0.2307 - acc: 0.8996
Epoch 1121/1500
 - 2s - loss: 0.2302 - acc: 0.8999
Epoch 1122/1500
 - 2s - loss: 0.2300 - acc: 0.8997
Epoch 1123/1500
 - 2s - loss: 0.2307 - acc: 0.8994
Epoch 1124/1500
 - 2s - loss: 0.2302 - acc: 0.8995
Epoch 1125/1500
 - 3s - loss: 0.2305 - acc: 0.8996
Epoch 1126/1500
 - 2s - loss: 0.2306 - acc: 0.8998
Epoch 1127/1500
 - 2s - loss: 0.2309 - acc: 0.8996
Epoch 1128/1500
 - 2s - loss: 0.2308 - acc: 0.8996
Epoch 1129/1500
 - 3s - loss: 0.2299 - acc: 0.8995
Epoch 1130/1500
 - 3s - loss: 0.2310 - acc: 0.8996
Epoch 1131/1500
 - 2s - loss: 0.2304 - acc: 0.8999
Epoch 1132/1500
 - 3s - loss: 0.2301 - acc: 0.9000
Epoch 1133/1500
 - 2s - loss: 0.2298 - acc: 0.8999
Epoch 1134/1500
 - 2s - loss: 0.2304 - acc: 0.8998
Epoch 1135/1500
 - 2s - loss: 0.2297 - acc: 0.9000
Epoch 1136/1500
 - 2s - loss: 0.2303 - acc: 0.8998
Epoch 1137/1500
 - 2s - loss: 0.2312 - acc: 0.8995
Epoch 1138/1500
 - 3s - loss: 0.2307 - acc: 0.8997
Epoch 1139/1500
 - 2s - loss: 0.2309 - acc: 0.8994
Epoch 1140/1500
 - 2s - loss: 0.2311 - acc: 0.8994
Epoch 1141/1500
 - 2s - loss: 0.2309 - acc: 0.8998
Epoch 1142/1500
 - 2s - loss: 0.2314 - acc: 0.8993
Epoch 1143/1500
 - 2s - loss: 0.2303 - acc: 0.8994
Epoch 1144/1500
 - 2s - loss: 0.2306 - acc: 0.8994
Epoch 1145/1500
 - 3s - loss: 0.2301 - acc: 0.8998
Epoch 1146/1500
 - 2s - loss: 0.2308 - acc: 0.8994
Epoch 1147/1500
 - 2s - loss: 0.2306 - acc: 0.8996
Epoch 1148/1500
 - 2s - loss: 0.2314 - acc: 0.8991
Epoch 1149/1500
 - 2s - loss: 0.2303 - acc: 0.8994
Epoch 1150/1500
 - 2s - loss: 0.2304 - acc: 0.8992
Epoch 1151/1500
 - 3s - loss: 0.2305 - acc: 0.8996
Epoch 1152/1500
 - 2s - loss: 0.2307 - acc: 0.8998
Epoch 1153/1500
 - 2s - loss: 0.2307 - acc: 0.8994
Epoch 1154/1500
 - 2s - loss: 0.2300 - acc: 0.9000
Epoch 1155/1500
 - 2s - loss: 0.2301 - acc: 0.9000
Epoch 1156/1500
 - 2s - loss: 0.2298 - acc: 0.8995
Epoch 1157/1500
 - 3s - loss: 0.2307 - acc: 0.8996
Epoch 1158/1500
 - 3s - loss: 0.2308 - acc: 0.8992
Epoch 1159/1500
 - 2s - loss: 0.2294 - acc: 0.9000
Epoch 1160/1500
 - 2s - loss: 0.2299 - acc: 0.8998
Epoch 1161/1500
 - 2s - loss: 0.2297 - acc: 0.8999
Epoch 1162/1500
 - 2s - loss: 0.2298 - acc: 0.8996
Epoch 1163/1500
 - 2s - loss: 0.2308 - acc: 0.8997
Epoch 1164/1500
 - 3s - loss: 0.2297 - acc: 0.8999
Epoch 1165/1500
 - 3s - loss: 0.2298 - acc: 0.9001
Epoch 1166/1500
 - 3s - loss: 0.2302 - acc: 0.8993
Epoch 1167/1500
 - 2s - loss: 0.2294 - acc: 0.9001
Epoch 1168/1500
 - 2s - loss: 0.2296 - acc: 0.9003
Epoch 1169/1500
 - 2s - loss: 0.2299 - acc: 0.8997
Epoch 1170/1500
 - 3s - loss: 0.2304 - acc: 0.8999
Epoch 1171/1500
 - 2s - loss: 0.2304 - acc: 0.8996
Epoch 1172/1500
 - 2s - loss: 0.2298 - acc: 0.9001
Epoch 1173/1500
 - 2s - loss: 0.2297 - acc: 0.8999
Epoch 1174/1500
 - 2s - loss: 0.2301 - acc: 0.9001
Epoch 1175/1500
 - 3s - loss: 0.2303 - acc: 0.8996
Epoch 1176/1500
 - 3s - loss: 0.2292 - acc: 0.8997
Epoch 1177/1500
 - 3s - loss: 0.2299 - acc: 0.8996
Epoch 1178/1500
 - 3s - loss: 0.2297 - acc: 0.8999
Epoch 1179/1500
 - 3s - loss: 0.2302 - acc: 0.8999
Epoch 1180/1500
 - 3s - loss: 0.2309 - acc: 0.8992
Epoch 1181/1500
 - 3s - loss: 0.2302 - acc: 0.8999
Epoch 1182/1500
 - 3s - loss: 0.2301 - acc: 0.8996
Epoch 1183/1500
 - 3s - loss: 0.2297 - acc: 0.8999
Epoch 1184/1500
 - 2s - loss: 0.2300 - acc: 0.9000
Epoch 1185/1500
 - 2s - loss: 0.2301 - acc: 0.8998
Epoch 1186/1500
 - 3s - loss: 0.2303 - acc: 0.8994
Epoch 1187/1500
 - 3s - loss: 0.2302 - acc: 0.8998
Epoch 1188/1500
 - 3s - loss: 0.2302 - acc: 0.8995
Epoch 1189/1500
 - 3s - loss: 0.2308 - acc: 0.8994
Epoch 1190/1500
 - 2s - loss: 0.2298 - acc: 0.8998
Epoch 1191/1500
 - 2s - loss: 0.2297 - acc: 0.8998
Epoch 1192/1500
 - 3s - loss: 0.2294 - acc: 0.9002
Epoch 1193/1500
 - 2s - loss: 0.2295 - acc: 0.9001
Epoch 1194/1500
 - 2s - loss: 0.2296 - acc: 0.8998
Epoch 1195/1500
 - 3s - loss: 0.2309 - acc: 0.8997
Epoch 1196/1500
 - 3s - loss: 0.2306 - acc: 0.8997
Epoch 1197/1500
 - 2s - loss: 0.2302 - acc: 0.8997
Epoch 1198/1500
 - 2s - loss: 0.2304 - acc: 0.8995
Epoch 1199/1500
 - 3s - loss: 0.2296 - acc: 0.8996
Epoch 1200/1500
 - 2s - loss: 0.2307 - acc: 0.8995
Epoch 1201/1500
 - 2s - loss: 0.2293 - acc: 0.9001
Epoch 1202/1500
 - 3s - loss: 0.2304 - acc: 0.8997
Epoch 1203/1500
 - 2s - loss: 0.2303 - acc: 0.8995
Epoch 1204/1500
 - 2s - loss: 0.2301 - acc: 0.8998
Epoch 1205/1500
 - 2s - loss: 0.2293 - acc: 0.8999
Epoch 1206/1500
 - 2s - loss: 0.2302 - acc: 0.8994
Epoch 1207/1500
 - 3s - loss: 0.2293 - acc: 0.9004
Epoch 1208/1500
 - 3s - loss: 0.2300 - acc: 0.8998
Epoch 1209/1500
 - 2s - loss: 0.2300 - acc: 0.8997
Epoch 1210/1500
 - 2s - loss: 0.2305 - acc: 0.8997
Epoch 1211/1500
 - 2s - loss: 0.2297 - acc: 0.8998
Epoch 1212/1500
 - 2s - loss: 0.2305 - acc: 0.8997
Epoch 1213/1500
 - 2s - loss: 0.2298 - acc: 0.8998
Epoch 1214/1500
 - 3s - loss: 0.2297 - acc: 0.8998
Epoch 1215/1500
 - 2s - loss: 0.2297 - acc: 0.8999
Epoch 1216/1500
 - 2s - loss: 0.2293 - acc: 0.8998
Epoch 1217/1500
 - 2s - loss: 0.2298 - acc: 0.8995
Epoch 1218/1500
 - 2s - loss: 0.2289 - acc: 0.9003
Epoch 1219/1500
 - 2s - loss: 0.2294 - acc: 0.9001
Epoch 1220/1500
 - 2s - loss: 0.2302 - acc: 0.8994
Epoch 1221/1500
 - 3s - loss: 0.2299 - acc: 0.8998
Epoch 1222/1500
 - 2s - loss: 0.2298 - acc: 0.8998
Epoch 1223/1500
 - 2s - loss: 0.2295 - acc: 0.9001
Epoch 1224/1500
 - 2s - loss: 0.2297 - acc: 0.8996
Epoch 1225/1500
 - 2s - loss: 0.2297 - acc: 0.8996
Epoch 1226/1500
 - 2s - loss: 0.2290 - acc: 0.9002
Epoch 1227/1500
 - 3s - loss: 0.2297 - acc: 0.8998
Epoch 1228/1500
 - 2s - loss: 0.2298 - acc: 0.9000
Epoch 1229/1500
 - 2s - loss: 0.2293 - acc: 0.9001
Epoch 1230/1500
 - 2s - loss: 0.2306 - acc: 0.8995
Epoch 1231/1500
 - 2s - loss: 0.2291 - acc: 0.9003
Epoch 1232/1500
 - 3s - loss: 0.2300 - acc: 0.8996
Epoch 1233/1500
 - 3s - loss: 0.2300 - acc: 0.8998
Epoch 1234/1500
 - 3s - loss: 0.2293 - acc: 0.8997
Epoch 1235/1500
 - 2s - loss: 0.2298 - acc: 0.8999
Epoch 1236/1500
 - 2s - loss: 0.2300 - acc: 0.8994
Epoch 1237/1500
 - 3s - loss: 0.2294 - acc: 0.8999
Epoch 1238/1500
 - 2s - loss: 0.2304 - acc: 0.8997
Epoch 1239/1500
 - 2s - loss: 0.2302 - acc: 0.8994
Epoch 1240/1500
 - 3s - loss: 0.2296 - acc: 0.8994
Epoch 1241/1500
 - 2s - loss: 0.2293 - acc: 0.9000
Epoch 1242/1500
 - 2s - loss: 0.2297 - acc: 0.8999
Epoch 1243/1500
 - 2s - loss: 0.2297 - acc: 0.8995
Epoch 1244/1500
 - 2s - loss: 0.2293 - acc: 0.8996
Epoch 1245/1500
 - 2s - loss: 0.2296 - acc: 0.9000
Epoch 1246/1500
 - 3s - loss: 0.2303 - acc: 0.8995
Epoch 1247/1500
 - 2s - loss: 0.2292 - acc: 0.9002
Epoch 1248/1500
 - 2s - loss: 0.2296 - acc: 0.8999
Epoch 1249/1500
 - 2s - loss: 0.2292 - acc: 0.8998
Epoch 1250/1500
 - 2s - loss: 0.2302 - acc: 0.8996
Epoch 1251/1500
 - 2s - loss: 0.2299 - acc: 0.8995
Epoch 1252/1500
 - 2s - loss: 0.2297 - acc: 0.9000
Epoch 1253/1500
 - 3s - loss: 0.2294 - acc: 0.9001
Epoch 1254/1500
 - 3s - loss: 0.2299 - acc: 0.9001
Epoch 1255/1500
 - 3s - loss: 0.2306 - acc: 0.8994
Epoch 1256/1500
 - 3s - loss: 0.2305 - acc: 0.8993
Epoch 1257/1500
 - 3s - loss: 0.2294 - acc: 0.8999
Epoch 1258/1500
 - 2s - loss: 0.2302 - acc: 0.8997
Epoch 1259/1500
 - 3s - loss: 0.2301 - acc: 0.8998
Epoch 1260/1500
 - 2s - loss: 0.2297 - acc: 0.8999
Epoch 1261/1500
 - 2s - loss: 0.2300 - acc: 0.9000
Epoch 1262/1500
 - 2s - loss: 0.2300 - acc: 0.8997
Epoch 1263/1500
 - 2s - loss: 0.2306 - acc: 0.8998
Epoch 1264/1500
 - 2s - loss: 0.2307 - acc: 0.8995
Epoch 1265/1500
 - 3s - loss: 0.2293 - acc: 0.9001
Epoch 1266/1500
 - 2s - loss: 0.2303 - acc: 0.8996
Epoch 1267/1500
 - 3s - loss: 0.2305 - acc: 0.8998
Epoch 1268/1500
 - 2s - loss: 0.2291 - acc: 0.9001
Epoch 1269/1500
 - 2s - loss: 0.2299 - acc: 0.8996
Epoch 1270/1500
 - 2s - loss: 0.2296 - acc: 0.8995
Epoch 1271/1500
 - 2s - loss: 0.2296 - acc: 0.9001
Epoch 1272/1500
 - 3s - loss: 0.2300 - acc: 0.9000
Epoch 1273/1500
 - 2s - loss: 0.2295 - acc: 0.8998
Epoch 1274/1500
 - 2s - loss: 0.2292 - acc: 0.9002
Epoch 1275/1500
 - 2s - loss: 0.2291 - acc: 0.9002
Epoch 1276/1500
 - 2s - loss: 0.2296 - acc: 0.9002
Epoch 1277/1500
 - 2s - loss: 0.2295 - acc: 0.8997
Epoch 1278/1500
 - 3s - loss: 0.2295 - acc: 0.8996
Epoch 1279/1500
 - 2s - loss: 0.2289 - acc: 0.9002
Epoch 1280/1500
 - 3s - loss: 0.2295 - acc: 0.8998
Epoch 1281/1500
 - 3s - loss: 0.2304 - acc: 0.8998
Epoch 1282/1500
 - 2s - loss: 0.2295 - acc: 0.9003
Epoch 1283/1500
 - 2s - loss: 0.2299 - acc: 0.8999
Epoch 1284/1500
 - 3s - loss: 0.2296 - acc: 0.9000
Epoch 1285/1500
 - 2s - loss: 0.2291 - acc: 0.9000
Epoch 1286/1500
 - 2s - loss: 0.2306 - acc: 0.8993
Epoch 1287/1500
 - 3s - loss: 0.2294 - acc: 0.8999
Epoch 1288/1500
 - 2s - loss: 0.2291 - acc: 0.8999
Epoch 1289/1500
 - 2s - loss: 0.2294 - acc: 0.9001
Epoch 1290/1500
 - 2s - loss: 0.2300 - acc: 0.8998
Epoch 1291/1500
 - 3s - loss: 0.2299 - acc: 0.8998
Epoch 1292/1500
 - 2s - loss: 0.2286 - acc: 0.9003
Epoch 1293/1500
 - 2s - loss: 0.2298 - acc: 0.8998
Epoch 1294/1500
 - 2s - loss: 0.2292 - acc: 0.9002
Epoch 1295/1500
 - 2s - loss: 0.2301 - acc: 0.8999
Epoch 1296/1500
 - 2s - loss: 0.2299 - acc: 0.8997
Epoch 1297/1500
 - 3s - loss: 0.2288 - acc: 0.9001
Epoch 1298/1500
 - 2s - loss: 0.2288 - acc: 0.9000
Epoch 1299/1500
 - 2s - loss: 0.2296 - acc: 0.8996
Epoch 1300/1500
 - 2s - loss: 0.2297 - acc: 0.8997
Epoch 1301/1500
 - 2s - loss: 0.2301 - acc: 0.8998
Epoch 1302/1500
 - 3s - loss: 0.2295 - acc: 0.8997
Epoch 1303/1500
 - 2s - loss: 0.2290 - acc: 0.9001
Epoch 1304/1500
 - 3s - loss: 0.2298 - acc: 0.8999
Epoch 1305/1500
 - 2s - loss: 0.2285 - acc: 0.9004
Epoch 1306/1500
 - 2s - loss: 0.2300 - acc: 0.8996
Epoch 1307/1500
 - 2s - loss: 0.2295 - acc: 0.8997
Epoch 1308/1500
 - 2s - loss: 0.2294 - acc: 0.9000
Epoch 1309/1500
 - 2s - loss: 0.2293 - acc: 0.8999
Epoch 1310/1500
 - 3s - loss: 0.2290 - acc: 0.9001
Epoch 1311/1500
 - 2s - loss: 0.2296 - acc: 0.9001
Epoch 1312/1500
 - 2s - loss: 0.2287 - acc: 0.8999
Epoch 1313/1500
 - 3s - loss: 0.2293 - acc: 0.9002
Epoch 1314/1500
 - 3s - loss: 0.2295 - acc: 0.8997
Epoch 1315/1500
 - 2s - loss: 0.2292 - acc: 0.8999
Epoch 1316/1500
 - 3s - loss: 0.2302 - acc: 0.8998
Epoch 1317/1500
 - 3s - loss: 0.2288 - acc: 0.9003
Epoch 1318/1500
 - 2s - loss: 0.2291 - acc: 0.8999
Epoch 1319/1500
 - 2s - loss: 0.2287 - acc: 0.9000
Epoch 1320/1500
 - 2s - loss: 0.2290 - acc: 0.9000
Epoch 1321/1500
 - 2s - loss: 0.2292 - acc: 0.9003
Epoch 1322/1500
 - 3s - loss: 0.2293 - acc: 0.8999
Epoch 1323/1500
 - 3s - loss: 0.2288 - acc: 0.9001
Epoch 1324/1500
 - 2s - loss: 0.2291 - acc: 0.9003
Epoch 1325/1500
 - 2s - loss: 0.2288 - acc: 0.9002
Epoch 1326/1500
 - 2s - loss: 0.2299 - acc: 0.9001
Epoch 1327/1500
 - 2s - loss: 0.2290 - acc: 0.9002
Epoch 1328/1500
 - 2s - loss: 0.2296 - acc: 0.9003
Epoch 1329/1500
 - 3s - loss: 0.2287 - acc: 0.9001
Epoch 1330/1500
 - 3s - loss: 0.2297 - acc: 0.8998
Epoch 1331/1500
 - 3s - loss: 0.2298 - acc: 0.8999
Epoch 1332/1500
 - 3s - loss: 0.2294 - acc: 0.9001
Epoch 1333/1500
 - 3s - loss: 0.2297 - acc: 0.9002
Epoch 1334/1500
 - 2s - loss: 0.2311 - acc: 0.8993
Epoch 1335/1500
 - 3s - loss: 0.2284 - acc: 0.9003
Epoch 1336/1500
 - 2s - loss: 0.2296 - acc: 0.8997
Epoch 1337/1500
 - 2s - loss: 0.2297 - acc: 0.9000
Epoch 1338/1500
 - 2s - loss: 0.2293 - acc: 0.9001
Epoch 1339/1500
 - 2s - loss: 0.2291 - acc: 0.9002
Epoch 1340/1500
 - 2s - loss: 0.2290 - acc: 0.8999
Epoch 1341/1500
 - 2s - loss: 0.2287 - acc: 0.9003
Epoch 1342/1500
 - 3s - loss: 0.2291 - acc: 0.8998
Epoch 1343/1500
 - 2s - loss: 0.2289 - acc: 0.8998
Epoch 1344/1500
 - 2s - loss: 0.2288 - acc: 0.9002
Epoch 1345/1500
 - 2s - loss: 0.2292 - acc: 0.9000
Epoch 1346/1500
 - 2s - loss: 0.2293 - acc: 0.9000
Epoch 1347/1500
 - 2s - loss: 0.2288 - acc: 0.9001
Epoch 1348/1500
 - 3s - loss: 0.2289 - acc: 0.9003
Epoch 1349/1500
 - 2s - loss: 0.2287 - acc: 0.9005
Epoch 1350/1500
 - 2s - loss: 0.2294 - acc: 0.9000
Epoch 1351/1500
 - 3s - loss: 0.2288 - acc: 0.8999
Epoch 1352/1500
 - 2s - loss: 0.2287 - acc: 0.9004
Epoch 1353/1500
 - 3s - loss: 0.2291 - acc: 0.9001
Epoch 1354/1500
 - 2s - loss: 0.2289 - acc: 0.9001
Epoch 1355/1500
 - 3s - loss: 0.2297 - acc: 0.8997
Epoch 1356/1500
 - 2s - loss: 0.2289 - acc: 0.9001
Epoch 1357/1500
 - 2s - loss: 0.2292 - acc: 0.8995
Epoch 1358/1500
 - 2s - loss: 0.2293 - acc: 0.8999
Epoch 1359/1500
 - 2s - loss: 0.2295 - acc: 0.9001
Epoch 1360/1500
 - 2s - loss: 0.2292 - acc: 0.9001
Epoch 1361/1500
 - 3s - loss: 0.2283 - acc: 0.9002
Epoch 1362/1500
 - 3s - loss: 0.2289 - acc: 0.9001
Epoch 1363/1500
 - 3s - loss: 0.2293 - acc: 0.8996
Epoch 1364/1500
 - 3s - loss: 0.2288 - acc: 0.9002
Epoch 1365/1500
 - 3s - loss: 0.2286 - acc: 0.9001
Epoch 1366/1500
 - 2s - loss: 0.2285 - acc: 0.9001
Epoch 1367/1500
 - 3s - loss: 0.2291 - acc: 0.9001
Epoch 1368/1500
 - 2s - loss: 0.2291 - acc: 0.9001
Epoch 1369/1500
 - 2s - loss: 0.2294 - acc: 0.9002
Epoch 1370/1500
 - 2s - loss: 0.2287 - acc: 0.9001
Epoch 1371/1500
 - 2s - loss: 0.2299 - acc: 0.8995
Epoch 1372/1500
 - 2s - loss: 0.2303 - acc: 0.8998
Epoch 1373/1500
 - 2s - loss: 0.2287 - acc: 0.9002
Epoch 1374/1500
 - 3s - loss: 0.2287 - acc: 0.9004
Epoch 1375/1500
 - 2s - loss: 0.2291 - acc: 0.9001
Epoch 1376/1500
 - 2s - loss: 0.2297 - acc: 0.8998
Epoch 1377/1500
 - 2s - loss: 0.2282 - acc: 0.9004
Epoch 1378/1500
 - 2s - loss: 0.2292 - acc: 0.9002
Epoch 1379/1500
 - 2s - loss: 0.2287 - acc: 0.9004
Epoch 1380/1500
 - 3s - loss: 0.2290 - acc: 0.8997
Epoch 1381/1500
 - 2s - loss: 0.2304 - acc: 0.8995
Epoch 1382/1500
 - 2s - loss: 0.2291 - acc: 0.8999
Epoch 1383/1500
 - 2s - loss: 0.2289 - acc: 0.8998
Epoch 1384/1500
 - 3s - loss: 0.2290 - acc: 0.9003
Epoch 1385/1500
 - 3s - loss: 0.2291 - acc: 0.8998
Epoch 1386/1500
 - 3s - loss: 0.2288 - acc: 0.9002
Epoch 1387/1500
 - 3s - loss: 0.2284 - acc: 0.9005
Epoch 1388/1500
 - 2s - loss: 0.2288 - acc: 0.9001
Epoch 1389/1500
 - 2s - loss: 0.2298 - acc: 0.8998
Epoch 1390/1500
 - 2s - loss: 0.2287 - acc: 0.9000
Epoch 1391/1500
 - 2s - loss: 0.2295 - acc: 0.9003
Epoch 1392/1500
 - 2s - loss: 0.2292 - acc: 0.8999
Epoch 1393/1500
 - 3s - loss: 0.2289 - acc: 0.8999
Epoch 1394/1500
 - 2s - loss: 0.2285 - acc: 0.9001
Epoch 1395/1500
 - 2s - loss: 0.2289 - acc: 0.9000
Epoch 1396/1500
 - 2s - loss: 0.2291 - acc: 0.9001
Epoch 1397/1500
 - 2s - loss: 0.2298 - acc: 0.9001
Epoch 1398/1500
 - 2s - loss: 0.2301 - acc: 0.8997
Epoch 1399/1500
 - 3s - loss: 0.2288 - acc: 0.9000
Epoch 1400/1500
 - 2s - loss: 0.2289 - acc: 0.9001
Epoch 1401/1500
 - 2s - loss: 0.2286 - acc: 0.9003
Epoch 1402/1500
 - 3s - loss: 0.2293 - acc: 0.9002
Epoch 1403/1500
 - 3s - loss: 0.2290 - acc: 0.9001
Epoch 1404/1500
 - 3s - loss: 0.2290 - acc: 0.9000
Epoch 1405/1500
 - 3s - loss: 0.2286 - acc: 0.9004
Epoch 1406/1500
 - 3s - loss: 0.2300 - acc: 0.8996
Epoch 1407/1500
 - 3s - loss: 0.2288 - acc: 0.9002
Epoch 1408/1500
 - 3s - loss: 0.2288 - acc: 0.9000
Epoch 1409/1500
 - 3s - loss: 0.2288 - acc: 0.9003
Epoch 1410/1500
 - 2s - loss: 0.2290 - acc: 0.9001
Epoch 1411/1500
 - 2s - loss: 0.2292 - acc: 0.9003
Epoch 1412/1500
 - 3s - loss: 0.2292 - acc: 0.8998
Epoch 1413/1500
 - 2s - loss: 0.2290 - acc: 0.8999
Epoch 1414/1500
 - 2s - loss: 0.2287 - acc: 0.9000
Epoch 1415/1500
 - 2s - loss: 0.2297 - acc: 0.8998
Epoch 1416/1500
 - 2s - loss: 0.2286 - acc: 0.9003
Epoch 1417/1500
 - 2s - loss: 0.2286 - acc: 0.9004
Epoch 1418/1500
 - 3s - loss: 0.2291 - acc: 0.9001
Epoch 1419/1500
 - 2s - loss: 0.2282 - acc: 0.9003
Epoch 1420/1500
 - 2s - loss: 0.2285 - acc: 0.9003
Epoch 1421/1500
 - 2s - loss: 0.2287 - acc: 0.9001
Epoch 1422/1500
 - 2s - loss: 0.2293 - acc: 0.9000
Epoch 1423/1500
 - 2s - loss: 0.2285 - acc: 0.9003
Epoch 1424/1500
 - 2s - loss: 0.2288 - acc: 0.8998
Epoch 1425/1500
 - 3s - loss: 0.2283 - acc: 0.9003
Epoch 1426/1500
 - 2s - loss: 0.2288 - acc: 0.8999
Epoch 1427/1500
 - 2s - loss: 0.2285 - acc: 0.9003
Epoch 1428/1500
 - 2s - loss: 0.2290 - acc: 0.8999
Epoch 1429/1500
 - 2s - loss: 0.2284 - acc: 0.9002
Epoch 1430/1500
 - 2s - loss: 0.2286 - acc: 0.8997
Epoch 1431/1500
 - 3s - loss: 0.2289 - acc: 0.9000
Epoch 1432/1500
 - 3s - loss: 0.2294 - acc: 0.8996
Epoch 1433/1500
 - 2s - loss: 0.2284 - acc: 0.9004
Epoch 1434/1500
 - 2s - loss: 0.2287 - acc: 0.9006
Epoch 1435/1500
 - 2s - loss: 0.2282 - acc: 0.9002
Epoch 1436/1500
 - 2s - loss: 0.2290 - acc: 0.9000
Epoch 1437/1500
 - 3s - loss: 0.2283 - acc: 0.9004
Epoch 1438/1500
 - 3s - loss: 0.2285 - acc: 0.9000
Epoch 1439/1500
 - 2s - loss: 0.2289 - acc: 0.9000
Epoch 1440/1500
 - 2s - loss: 0.2286 - acc: 0.9001
Epoch 1441/1500
 - 2s - loss: 0.2304 - acc: 0.8995
Epoch 1442/1500
 - 2s - loss: 0.2292 - acc: 0.9002
Epoch 1443/1500
 - 2s - loss: 0.2290 - acc: 0.9003
Epoch 1444/1500
 - 3s - loss: 0.2286 - acc: 0.9000
Epoch 1445/1500
 - 2s - loss: 0.2287 - acc: 0.9000
Epoch 1446/1500
 - 2s - loss: 0.2289 - acc: 0.8999
Epoch 1447/1500
 - 2s - loss: 0.2289 - acc: 0.9000
Epoch 1448/1500
 - 2s - loss: 0.2289 - acc: 0.8999
Epoch 1449/1500
 - 2s - loss: 0.2288 - acc: 0.9002
Epoch 1450/1500
 - 3s - loss: 0.2288 - acc: 0.9001
Epoch 1451/1500
 - 3s - loss: 0.2289 - acc: 0.8999
Epoch 1452/1500
 - 3s - loss: 0.2295 - acc: 0.8996
Epoch 1453/1500
 - 2s - loss: 0.2290 - acc: 0.9002
Epoch 1454/1500
 - 3s - loss: 0.2288 - acc: 0.8999
Epoch 1455/1500
 - 3s - loss: 0.2287 - acc: 0.8997
Epoch 1456/1500
 - 3s - loss: 0.2293 - acc: 0.9002
Epoch 1457/1500
 - 3s - loss: 0.2280 - acc: 0.9003
Epoch 1458/1500
 - 2s - loss: 0.2285 - acc: 0.9002
Epoch 1459/1500
 - 2s - loss: 0.2282 - acc: 0.9004
Epoch 1460/1500
 - 2s - loss: 0.2284 - acc: 0.9002
Epoch 1461/1500
 - 2s - loss: 0.2291 - acc: 0.9000
Epoch 1462/1500
 - 2s - loss: 0.2288 - acc: 0.9005
Epoch 1463/1500
 - 3s - loss: 0.2283 - acc: 0.9001
Epoch 1464/1500
 - 2s - loss: 0.2284 - acc: 0.9004
Epoch 1465/1500
 - 2s - loss: 0.2285 - acc: 0.9001
Epoch 1466/1500
 - 2s - loss: 0.2287 - acc: 0.8996
Epoch 1467/1500
 - 2s - loss: 0.2286 - acc: 0.9004
Epoch 1468/1500
 - 2s - loss: 0.2284 - acc: 0.9000
Epoch 1469/1500
 - 3s - loss: 0.2286 - acc: 0.8999
Epoch 1470/1500
 - 2s - loss: 0.2282 - acc: 0.9002
Epoch 1471/1500
 - 2s - loss: 0.2287 - acc: 0.9004
Epoch 1472/1500
 - 2s - loss: 0.2285 - acc: 0.9004
Epoch 1473/1500
 - 2s - loss: 0.2289 - acc: 0.9003
Epoch 1474/1500
 - 2s - loss: 0.2284 - acc: 0.9006
Epoch 1475/1500
 - 2s - loss: 0.2285 - acc: 0.9000
Epoch 1476/1500
 - 3s - loss: 0.2283 - acc: 0.9003
Epoch 1477/1500
 - 3s - loss: 0.2288 - acc: 0.8999
Epoch 1478/1500
 - 3s - loss: 0.2280 - acc: 0.9006
Epoch 1479/1500
 - 3s - loss: 0.2279 - acc: 0.9004
Epoch 1480/1500
 - 3s - loss: 0.2284 - acc: 0.9000
Epoch 1481/1500
 - 3s - loss: 0.2293 - acc: 0.8997
Epoch 1482/1500
 - 3s - loss: 0.2290 - acc: 0.8997
Epoch 1483/1500
 - 3s - loss: 0.2287 - acc: 0.9001
Epoch 1484/1500
 - 3s - loss: 0.2284 - acc: 0.9001
Epoch 1485/1500
 - 2s - loss: 0.2288 - acc: 0.9003
Epoch 1486/1500
 - 2s - loss: 0.2292 - acc: 0.8995
Epoch 1487/1500
 - 2s - loss: 0.2286 - acc: 0.9000
Epoch 1488/1500
 - 3s - loss: 0.2283 - acc: 0.9002
Epoch 1489/1500
 - 2s - loss: 0.2280 - acc: 0.9002
Epoch 1490/1500
 - 2s - loss: 0.2290 - acc: 0.9000
Epoch 1491/1500
 - 2s - loss: 0.2289 - acc: 0.9002
Epoch 1492/1500
 - 2s - loss: 0.2289 - acc: 0.9004
Epoch 1493/1500
 - 2s - loss: 0.2283 - acc: 0.9003
Epoch 1494/1500
 - 3s - loss: 0.2290 - acc: 0.9002
Epoch 1495/1500
 - 3s - loss: 0.2277 - acc: 0.9006
Epoch 1496/1500
 - 2s - loss: 0.2282 - acc: 0.9007
Epoch 1497/1500
 - 2s - loss: 0.2284 - acc: 0.9001
Epoch 1498/1500
 - 2s - loss: 0.2287 - acc: 0.8999
Epoch 1499/1500
 - 2s - loss: 0.2285 - acc: 0.9000
Epoch 1500/1500
 - 2s - loss: 0.2285 - acc: 0.9001
116811/116811 [==============================] - 1s 11us/step

acc: 89.94%
In [25]:
from keras.utils.np_utils import to_categorical
print ("Train Predictions:")

scores = model.evaluate(X_train, yt_kr)
print("Score - \n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
tpredictions = model.predict_classes(X_train)
tprediction_ = np.argmax(to_categorical(tpredictions), axis = 1)

print ("Test Predictions:")
scores = model.evaluate(X_test, ytest_kr)
print("Score - \n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))

predictions = model.predict_classes(X_test)
prediction_ = np.argmax(to_categorical(predictions), axis = 1)
#prediction_ = labelencoder_X.inverse_transform(prediction_)
#y_test_orig = labelencoder_X.inverse_transform(y_test)

#print(confusion_matrix(y_test, prediction_))
#    print(confusion_matrix(y_train, y_train_pred))
df_cmtr = pd.DataFrame(confusion_matrix(y_train, tprediction_), index = ['cancelled', 'failed', 'successful'],
                  columns = ['cancelled(p)', 'failed(p)', 'successful(p)'])
df_cm = pd.DataFrame(confusion_matrix(y_test, prediction_), index = ['cancelled', 'failed', 'successful'],
                  columns = ['cancelled(p)', 'failed(p)', 'successful(p)'])
plt.figure(figsize=(14,10))
s_title ='NN Confusion Matrix'
plt.suptitle(s_title, fontsize=16)

plt.subplot(2,2,1)
plt.gca().set_title('Train Data')
sns.heatmap(df_cmtr, annot=True, cmap=plt.cm.Reds)
plt.subplot(2,2,2)
plt.gca().set_title('Test Data')
sns.heatmap(df_cm, annot=True, cmap=plt.cm.Reds)
plt.show()

#for i, j in zip(prediction_ , y_test_orig):
#    print( " the nn predict {}, and the species to find is {}".format(i,j))
Train Predictions:
467243/467243 [==============================] - 5s 10us/step
Score - 
acc: 89.99%
Test Predictions:
116811/116811 [==============================] - 1s 10us/step
Score - 
acc: 89.94%

Conclusion

  • Deliverable:

    • Selected Random Forest(Tuned) as it gives accuracy of 0.88, std-0.00 and better on predicting on whole dataset compare to others.
    • Retrained model on whole dataset(Train + Test) with 10-KFold.
  • Improvements:

  • Feature Engineering and Hypothesis Generation can still change set of input features to optimize model. Bag of words, TF-IDF etc...
  • Interpreting model predictions would add extra benefits. Relative Feature Inportance, Permutation Importance, Partial Feature Dependencies, SHAP Values
  • Baysian Optimization can be used to perform Hyperparameter Tuning.

Drawbacks - RF

  • Model interpretability: Random forest models are not all that interpretable; they are like black boxes.
  • For very large data sets, the size of the trees can take up a lot of memory.
  • It can tend to overfit, so you should tune the hyperparameters.